Search Results: "sysv"

1 April 2017

Antoine Beaupré: My free software activities, February and March 2017

Looking into self-financing Before I begin, I should mention that I started tracking my time working on free software more systematically. I spend a lot of time on the computer, as regular readers of this blog might remember, so I wanted to know exactly how much time was paid vs. free work. I was already using org-mode's time clock system to keep track of my work hours, so I just extended this to my regular free software contributions, which also helps in writing those reports. It turns out that over 60% of my computer time is spent working on free software. That's huge! I was expecting something more in the range of 20 to 40% of my time. So I started thinking about ways of financing this work. I created a Patreon page but I'm hesitant about launching such a campaign: the only thing worse than "no Patreon page" is "a Patreon page with failed goals and no one financing it". So before starting such an effort, I'd like to get a feeling for what other people's experiences with it have been. I know that joeyh is close to achieving his goals, but I can't compare myself with the guy who invented git-annex or debhelper, so I'm concerned I wouldn't be able to raise the same level of funding. If you have any advice, feel free to contact me in private or in the comments. If you would be ready to fund my work, I'd obviously love to know about it, but I guess I won't get real numbers until I actually open up such a page... Now, on to the regular report.

Wallabako I spent a good chunk of time completing most of the things I had in mind for Wallabako, which I mentioned quickly in the previous report. Wallabako is now much easier to install, with clearer instructions, an easier-to-use configuration file, more reliable synchronization and read-status propagation. As usual, the Wallabako README file has all the details. I've also looked at better integration with Koreader, the free software e-reader application that forms the basis of the okreader free software distribution, which has been able to port Debian to the Kobo e-readers, a project I am really excited about. This project has the potential of supporting Kobo readers beyond the lifetime that upstream grants them, and it removes a lot of the proprietary software and spyware that ships with the Kobo readers. So I have made a few contributions to okreader and also to koreader, the ebook reader okreader is based on.

Stressant I rewrote stressant, my simple burn-in and stress-testing tool. After struggling in turn with Debirf, live-build, vmdebootstrap and even FAI, I figured maybe it wasn't the best idea to try and reinvent that particular wheel: instead of reinventing how to build yet another Debian system build tool, maybe I should just reuse what's already there. It turns out there's a well-known, successful and fairly complete recovery system called Grml. It is a Debian derivative, so all I needed to do was to stop procrastinating and write the actual stressant tool instead of just creating a distribution with a bunch of random tools shipped in. This allowed me to focus on which tools were the best to stress-test different components. The selection ended up including fio, which can also be used to overwrite disk drives with the proper options (--overwrite and --size=100%), although grml also ships with nwipe for wiping old spinning disks and hdparm to do a secure erase of SSD disks (for whatever that's worth). Stressant still needs to be shipped with grml for this transition to be complete. In the meantime, I was able to configure the excellent public Gitlab CI service to provide ISO images with Stressant built in as a stopgap measure. I also need to figure out a way to automate starting stressant from a boot menu to automate deployments on a larger scale, although because I have little need for the feature at this moment, this will likely wait until a sponsor shows up. Still, stressant has useful features like the capability of sending logs by email using a fresh new implementation of the Python SMTPHandler (BufferedSMTPHandler) which waits for logging to complete before sending a single email. Another interesting piece of code in there is the NegateAction argparse handler that enables the use of "toggle flags" (e.g. --flag / --no-flag). I'm so happy with the code that I figure I could just share it here directly:
class NegateAction(argparse.Action):
    '''add a toggle flag to argparse

    this is similar to 'store_true' or 'store_false', but allows
    arguments prefixed with --no to disable the default. the default
    is set depending on the first argument - if it starts with the
    negative form (defined by default as '--no'), the default is False,
    otherwise True.
    '''
    negative = '--no'
    def __init__(self, option_strings, *args, **kwargs):
        '''set default depending on the first argument'''
        default = not option_strings[0].startswith(self.negative)
        super(NegateAction, self).__init__(option_strings, *args,
                                           default=default, nargs=0, **kwargs)
    def __call__(self, parser, ns, values, option):
        '''set the truth value depending on whether
        it starts with the negative form'''
        setattr(ns, self.dest, not option.startswith(self.negative))
Short and sweet. I wonder why stuff like this is not in the standard library yet - maybe just because no one has bothered yet? It'd be great to get feedback from more experienced Pythonistas on this one. I hope that my work on Stressant is complete. I get zero funding for this work, and have little use for it myself: I manage only a few machines, and such a tool really shines when you regularly put new hardware online, which is (fortunately?) not my case anymore. I'd be happy, of course, to accompany organisations and people that wish to further develop and use such a tool. A short demo of stressant as well as a detailed description of how it works is of course available in its README file.
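For readers who want to try it out, here is a minimal usage sketch (my own illustration, not taken from the stressant source) with the NegateAction class above in scope; argparse derives the destination from the first option string, so registering both spellings in one call gives a single boolean attribute:
import argparse

parser = argparse.ArgumentParser()
# register both the positive and negative spellings in one call;
# argparse derives dest="flag" from the first option string ('--flag')
parser.add_argument('--flag', '--no-flag', action=NegateAction,
                    help='enable or disable the flag (default: enabled)')

print(parser.parse_args([]).flag)             # True  (default, from '--flag')
print(parser.parse_args(['--no-flag']).flag)  # False
print(parser.parse_args(['--flag']).flag)     # True
The BufferedSMTPHandler mentioned above ships with stressant itself; purely to illustrate the idea (buffer everything, send a single message at the end), a sketch built on the standard logging.handlers.BufferingHandler could look like this (names and defaults are assumptions, not the actual stressant code):
import logging.handlers
import smtplib
from email.message import EmailMessage

class BufferedSMTPHandler(logging.handlers.BufferingHandler):
    '''Buffer all log records and send them as one email when closed.'''
    def __init__(self, mailhost, fromaddr, toaddrs, subject, capacity=100000):
        super().__init__(capacity)
        self.mailhost = mailhost
        self.fromaddr = fromaddr
        self.toaddrs = toaddrs
        self.subject = subject

    def shouldFlush(self, record):
        # never flush mid-run; BufferingHandler.close() calls flush() at the end
        return False

    def flush(self):
        if not self.buffer:
            return
        msg = EmailMessage()
        msg['From'] = self.fromaddr
        msg['To'] = ', '.join(self.toaddrs)
        msg['Subject'] = self.subject
        msg.set_content('\n'.join(self.format(record) for record in self.buffer))
        with smtplib.SMTP(self.mailhost) as smtp:
            smtp.send_message(msg)
        self.buffer.clear()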

Standard third party repositories After looking at improvements for the grml repository instructions, I realized there was no real "best practices" document on how to configure an Apt repository. Sure, there are tools like reprepro and others, but those hardly qualify as policy: they are very flexible, and there are lots of ways to create insecure repositories or "curl | sh"-style instructions, which we of course generally want to avoid. While the larger problem of untrusted Debian packages remains generally unsolved (e.g. when you install any .deb file, it can get root on your system), it seemed to me one critical part of this problem was how to add a random third-party repository to your machine while limiting, as much as possible, what possible attackers could do with such a repository. In other words, to solve the more general problem of insecure .deb files, we also need to solve the distribution problem, otherwise fixing the .deb files themselves will be useless. This led to the creation of standardized repository instructions that define:
  1. how to distribute the repository's public signing key (i.e. over HTTPS)
  2. how to name suites and components (e.g. use stable and main unless you have a good reason, and explain yourself)
  3. recommend a healthy dose of apt preferences pinning
  4. how to distribute keys (e.g. with a deriv-archive-keyring package)
I've seen so many third party repositories get this wrong. For example, a lot of repositories recommend this type of command to initialize the OpenPGP trust path:
curl http://example.com/key.asc | apt-key add -
This has the following problems:
  • the key is transferred in plaintext and can easily be manipulated by an active attacker (e.g. a router on your path to the server or a neighbor in a Wi-Fi cafe)
  • the key is added to the main trust root, which allows the key to authenticate as the real Debian archive, therefore giving it all rights over all packages
  • since it's part of the global archive, it's difficult for a package to remove/add the key when a key rollover is necessary (and repositories generally don't provide a deriv-archive-keyring to do that process anyway)
An example of this is the Docker install instructions, which at least manage to do this over HTTPS. Some other repositories don't even bother teaching people about the proper way of adding those keys. We settled on:
wget -O /usr/share/keyrings/deriv-archive-keyring.gpg https://deriv.example.net/debian/deriv-archive-keyring.gpg
That location was explicitly chosen to be out of the main trust directory, so that it needs to be explicitly added to the sources.list as well:
deb [signed-by=/usr/share/keyrings/deriv-archive-keyring.gpg] https://deriv.example.net/debian/ stable main
Similarly, we highly recommend users set up "apt pinning" to restrict what a given repository can do. Since pinning is so confusing, most people don't actually bother even configuring it, and I have yet to see a single repo advise its users to configure those preferences, which are essential to limit what a repository can do. To keep configuration simple, we recommend this:
Package: *
Pin: origin deriv.example.net
Pin-Priority: 100
Obviously, for a single-package repository, the actual package name should be listed, e.g.:
Package: foo
Pin: origin deriv.example.net
Pin-Priority: 100
And the priority should probably be set to 1 unless you want to allow automatic upgrades. It is my hope that this design will get more traction in the years to come and become a de facto standard that will be a key part of safely adding third-party repositories. There is obviously much more work to be done to improve security when installing untrusted .deb files, and I encourage Debian developers to consider contributing to the UntrustedDebs discussions and particularly to the Teams/Dpkg/Spec/DeclarativePackaging work.

Signal R&D I spent a significant amount of time this month struggling with the Signal project on my phone. I'm still ambivalent about Signal: it's a centralized design, too dependent on phone numbers, but I must admit they get a lot of things right and it's the only free-software platform that allows for easy-to-use, multi-platform videoconferencing that my family can use. I've been following Signal for a while: up until now, I had been using the LibreSignal rebuild of the official client, as it is distributed on an F-Droid repository. Because I try to avoid Google (proprietary) software on my phone, it's basically the only way I could even install Signal. Unfortunately, the repository is out of date and introduces another point of trust in the distribution model: now you not only need to trust the Signal authors to do the right thing, you also need to trust that F-Droid repo not to inject nasty code on your phone. I've therefore started a discussion about how Signal could be distributed outside of the Google Play Store. I'd like to think it's one of the things that led the Signal people to distribute an official copy of Signal outside of the Play Store. After much struggling, I was able to upgrade to this official client and will be able to upgrade easily by just downloading the APK. (Do note that I ended up reinstalling and re-registering Signal, which unfortunately changed my secret keys.) I do hope Signal enters F-Droid one day, but it could take a while because it still doesn't work without Google services and barely works with MicroG, the free software alternative to the Google services clients. Moxie also set out a list of requirements, like crash reporting and statistics, that need to be implemented on F-Droid's side before he agrees to the deployment, so this could take a while. I've also participated in the, ahem, discussion on the JWZ blog regarding a supposed vulnerability in Signal where it would leak previously unknown phone numbers to third parties. I reviewed the way the phone number is uploaded and, while it is possible to create a rainbow table of phone numbers (which are hashed with a truncated SHA-1 checksum), I couldn't verify the claims of other participants in the thread. For me, Signal still does the right thing with contacts, although I do question the way "read status" notifications get transmitted, but that belongs in another bug report / blog post.
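To see why a truncated hash of a phone number offers little protection, here is a small Python sketch of the enumeration idea (my own illustration, not Signal's code; the truncation length and number format are assumptions, and a real attacker would precompute the full number space or use a proper rainbow table rather than this toy range):
import hashlib

def truncated_sha1(number, length=10):
    # truncation length here is an assumption, purely for illustration
    return hashlib.sha1(number.encode()).hexdigest()[:length]

# Phone numbers form a tiny keyspace, so enumerating them is cheap: build a
# lookup table once (a toy range of 100,000 numbers here), then any observed
# hash maps straight back to the number that produced it.
prefix = "+1555200"
table = {truncated_sha1(prefix + "%05d" % n): prefix + "%05d" % n
         for n in range(100000)}

observed = truncated_sha1("+155520012345")
print(table[observed])  # prints +155520012345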

Debian Long Term Support (LTS) It's been more than a year that I've been working on Debian LTS, the project started by Raphael Hertzog at Freexian. I didn't work much in February, so I had a lot of hours to catch up on, and was unfortunately unable to do so, partly because I was busy with other projects, and partly because my colleagues are doing a great job at resolving the most important issues. So one of my concerns this month was finding work. It seemed that all the hard packages were either taken (e.g. my usual favorites, tiff and imagemagick, were done by others) or just too challenging (e.g. I don't feel quite comfortable tackling the LTS branch of the Linux kernel yet). I spent quite a bit of time trying to figure out what was wrong with pcre3, only to realise the "32" in the report was not about the architecture, but about the character width. Because of this, I marked 4 CVEs (CVE-2017-7186, CVE-2017-7244, CVE-2017-7245, CVE-2017-7246) as "not-affected", since the 32-bit character support wasn't enabled in wheezy (or jessie, for that matter). I still spent some time trying to reproduce the issues, which requires a compiler with AddressSanitizer support, something that was introduced in both Clang and GCC after Wheezy was released, which makes reproducing this fairly complicated... This allowed me to experiment more with Vagrant, however, and I have provided the Debian cloud team with a 32-bit Vagrant box that was merged in shortly after, although it doesn't show up yet in the official list of Debian images. Then I looked at the apparmor situation (CVE-2017-6507, Debian bug #858768). That was one tricky bug as well, since it's not a security issue in apparmor per se, but more an issue with things that assume a certain behavior from apparmor. I have concluded that Wheezy was not affected because there are no assumptions of proper isolation there - which are provided only starting from LXC 1.0 - and Docker is not in Wheezy. I also couldn't reproduce the issue on Jessie, but, as it turns out, the issue was sysvinit-specific, which is why I couldn't reproduce it under the default systemd configuration shipped with Jessie. I also looked at the various binutils security issues: as I reported on the mailing list, I didn't see anything serious enough in there to warrant a security release and followed the lead of both the stable and Red Hat security teams by marking this "no-dsa". I similarly reviewed the mp3splt security issues (specifically CVE-2017-5666) and was fairly puzzled by that issue, which seems to be triggered only by the same address sanitization extensions as PCRE, although there was some pretty wild interplay with debugging flags in there. All in all, it seems we can't reproduce that issue in wheezy, but I do not feel confident enough in the results to push that issue aside for now. I finally uploaded the pending graphicsmagick issue (DLA-547-2), a regression update to fix a crash that was introduced in the previous release (DLA-547-1, mistakenly named DLA-574-1). Hopefully that release should clear up some of the confusion and fix the regression. I also released DLA-879-1 for CVE-2017-6369 in firebird2.5, which was an interesting experiment: I couldn't reproduce the issue in a local VM even after following the Ubuntu setup tutorial, as I wasn't too familiar with the Firebird database until now (hint: the default username and password is sysdba/masterkey). I ended up assuming we were vulnerable and just backported the patch after seeing the jessie folks push out a release just in case.
I also looked at updating the ca-certificates package to deal with the pending WoSign/StartCom removal: I made an explicit list of the CAs that need to be removed after reviewing the Mozilla list. I also sent a patch for an unrelated issue where ca-certificates is writing to /usr/local (!!) in Debian bug #843722. I have also done some "meta" work in starting a discussion about fixing the missing DLA links in the tracker, as you will notice all of the above links lead to nowhere. Thanks to pabs, there are now some links, but unfortunately there are about 500 DLAs missing from the website. We also discussed ways to address Debian bug #859123, something which is currently a manual process. This is now in the hands of the excellent webmaster team. I have also filed a few missing security bugs (Debian bug #859135, Debian bug #859136), partly because I wanted to help the security team. It turned out that I felt the script needed some improvements, so I submitted a patch to make it easier to run.

Other projects As usual, there's the usual mixed bag of chaos: more stuff on Github...

28 March 2017

Axel Beckert: System Tray Icon to Monitor a Linux Software RAID Locally

About a year ago I bought a new workstation computer for myself at home. It's a Tuxedo XUX_Cube which is advertised as a gaming PC. But I ordered a slightly atypical non-gamer configuration: Of course the box runs Debian. To be more precise, it runs Debian Sid with sysvinit-core as init system and i3 as window manager. As I usually have no monitoring clients on my laptops and private workstations, I rather often felt the urge to do a cat /proc/mdstat on that box. So at some point I wanted something like smart-notifier, but for Linux Software (MD) RAIDs. And since I found nothing, I did what Open Source guys usually do in such cases: I wrote it myself, of course in Perl, and called it systray-mdstat. First I wondered about which build system would be most suitable for that task, but in the end I once again went with Dist::Zilla for the upstream build system and hence dh-dist-zilla for the Debian packaging. Ideas for the actual implementation were taken from Wouter's fdpowermon for the systray icon framework in Perl and Myon's mdstat Xymon plugin for an already proven logic to parse /proc/mdstat. (Both Wouter and Myon have stated in a GnuPG-signed e-mail that I copied less code than would validate their copyrights, so I was able to license it under a single license, namely GNU GPL version 3.) As of now, systray-mdstat is also available as a package in Debian Unstable. It won't make it to Stretch as its first line of code was written after the soft freeze for Stretch was already in place.
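systray-mdstat itself is written in Perl and borrows its parsing logic from Myon's Xymon plugin, as described above; purely to illustrate the underlying idea (and assuming the usual mdstat status markers, where an underscore in the bracket, e.g. "[U_]", marks a failed or missing member), here is a minimal sketch in Python rather than Perl:
import re

def degraded_arrays(path="/proc/mdstat"):
    '''Return the md devices whose status line shows a failed/missing member.'''
    degraded = []
    current = None
    with open(path) as mdstat:
        for line in mdstat:
            # device lines look like "md0 : active raid1 sda1[0] sdb1[1]"
            device = re.match(r"^(md\d+)\s*:", line)
            if device:
                current = device.group(1)
            # status lines end in something like "[2/2] [UU]"; an underscore
            # (e.g. "[U_]") means a member is failed or missing
            elif current and re.search(r"\[U*_[U_]*\]", line):
                degraded.append(current)
    return degraded

if __name__ == "__main__":
    bad = degraded_arrays()
    print("degraded arrays: " + (", ".join(bad) if bad else "none"))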

14 February 2017

Arturo Borrero González: About process limits

Graphs The other day I had to deal with an outage in one of our LDAP servers, which is running the old Debian Wheezy (yeah, I know, we should update it). We are running openldap, the slapd daemon. And after searching the log files, the cause of the outage was obvious:
[...]
slapd[7408]: warning: cannot open /etc/hosts.allow: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.deny: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.allow: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.deny: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.allow: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.deny: Too many open files
[...]
[Please read About process limits, round 2 for updated info on this issue] I couldn't believe that openldap is using tcp_wrappers (or libwrap), an ancient piece of software that hasn't been updated for years, replaced in many ways by more powerful tools (like nftables). I was blinded by this and ran to open a Debian bug against openldap: #854436 (openldap: please don't use tcp-wrappers with slapd). The reply from Steve Langasek was clear:
If people are hitting open file limits trying to open two extra files,
disabling features in the codebase is not the correct solution.
Obviously, the problem was somewhere else. I started investigating system limits, which seem to have two main components: system-wide limits and per-user limits. According to my searching, my slapd daemon was being hit by the latter. I reviewed the default system-wide limits and they seemed OK. So, let's change the other limits. Most of the documentation around the internet points you to a /etc/security/limits.conf file, which is then read by pam_limits. You can check current limits using the ulimit bash builtin. In the case of my slapd:
arturo@debian:~% sudo su openldap -s /bin/bash
openldap@debian:~% ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 7915
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 2000
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
This seems to suggest that the openldap user is constrained to 1024 open files (and some more if we check the hard limit). The 1024 limit seems low for a rather busy service. According to most of the internet docs, I'm supposed to put this in /etc/security/limits.conf:
[...]
#<domain>      <type>  <item>         <value>
openldap	soft	nofile		1000000
openldap	hard	nofile		1000000
[...]
I should check as well that pam_limits is loaded, in /etc/pam.d/other:
[...]
session		required	pam_limits.so
[...]
After reloading the openldap session, you can check that, indeed, the limits are changed as reported by ulimit. But at some point, the slapd daemon starts to drop connections again. Things start to turn weird here. The changes we made until now don't work, probably because when the slapd daemon is spawned at boot (by root, sysvinit in this case) no PAM mechanisms are triggered. So, I was forced to learn a new thing: process limits. You can check the limits for a given process this way:
arturo@debian:~% cat /proc/$(pgrep slapd)/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             16000                16000                processes
Max open files            1024                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       16000                16000                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
Good, it seems we have some more limits attached to our slapd daemon process. If we search the internet for how to change process limits, most of the docs point to a tool known as prlimit. According to the manpage, this is a tool to get and set process resource limits, which is just what I was looking for. According to the docs, the prlimit system call is supported since Linux 2.6.36, and I'm running 3.2, so no problem here. Things look promising. But yes, more problems: the prlimit tool is not included in the Debian Wheezy release. Needing to call a single system call was not going to stop me now, so I searched the web some more until I found this useful manpage: getrlimit(2). There is sample C code included in the manpage, in which we only need to replace RLIMIT_CPU with RLIMIT_NOFILE:
#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/resource.h>

#define errExit(msg) do { perror(msg); exit(EXIT_FAILURE); \
                        } while (0)

int
main(int argc, char *argv[])
{
    struct rlimit old, new;
    struct rlimit *newp;
    pid_t pid;

    if (!(argc == 2 || argc == 4)) {
        fprintf(stderr, "Usage: %s <pid> [<new-soft-limit> "
                "<new-hard-limit>]\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    pid = atoi(argv[1]);        /* PID of target process */

    newp = NULL;
    if (argc == 4) {
        new.rlim_cur = atoi(argv[2]);
        new.rlim_max = atoi(argv[3]);
        newp = &new;
    }

    /* Set the open files limit (RLIMIT_NOFILE) of the target process;
       retrieve and display the previous limit */
    if (prlimit(pid, RLIMIT_NOFILE, newp, &old) == -1)
        errExit("prlimit-1");
    printf("Previous limits: soft=%lld; hard=%lld\n",
            (long long) old.rlim_cur, (long long) old.rlim_max);

    /* Retrieve and display the new open files limit */
    if (prlimit(pid, RLIMIT_NOFILE, NULL, &old) == -1)
        errExit("prlimit-2");
    printf("New limits: soft=%lld; hard=%lld\n",
            (long long) old.rlim_cur, (long long) old.rlim_max);

    exit(EXIT_SUCCESS);
}
And then compile it like this:
arturo@debian:~% gcc limits.c -o limits
We can then call this new binary like this:
arturo@debian:~% sudo limits $(pgrep slapd) 1000000 1000000
Finally, the limit seems OK:
arturo@debian:~% cat /proc/$(pgrep slapd)/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             16000                16000                processes
Max open files            1000000              1000000              files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       16000                16000                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
Don't forget to apply this change every time the slapd daemon starts. Has nobody found this issue before? Really?
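As an aside not found in the original post: on systems with Python 3.4 or newer, the standard resource module wraps the prlimit() system call, so the same adjustment can be done without compiling anything. A minimal sketch, assuming a single running slapd and sufficient privileges (root or CAP_SYS_RESOURCE):
import resource
import subprocess

# PID of the running slapd (the equivalent of $(pgrep slapd); assumes one instance)
pid = int(subprocess.check_output(["pgrep", "-o", "slapd"]))

soft, hard = resource.prlimit(pid, resource.RLIMIT_NOFILE)
print("Previous limits: soft=%d; hard=%d" % (soft, hard))

# raise both the soft and hard open-files limits of the target process
resource.prlimit(pid, resource.RLIMIT_NOFILE, (1000000, 1000000))

soft, hard = resource.prlimit(pid, resource.RLIMIT_NOFILE)
print("New limits: soft=%d; hard=%d" % (soft, hard))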

7 January 2017

Simon Richter: Crossgrading Debian in 2017

So, once again I had a box that had been installed with the kind-of-wrong Debian architecture, in this case, powerpc (32 bit, bigendian), while I wanted ppc64 (64 bit, bigendian). So, crossgrade time. If you want to follow this, be aware that I use sysvinit. I doubt this can be done this way with systemd installed, because systemd has a lot more dependencies for PID 1, and there is also a dbus daemon involved that cannot be upgraded without a reboot. To make this a bit more complicated, ppc64 is an unofficial port, so it is even less synchronized across architectures than sid normally is (I would have used jessie, but there is no jessie for ppc64). Step 1: Be Prepared To work around the archive synchronisation issues, I installed pbuilder and created 32 and 64 bit base.tgz archives:
pbuilder --create --basetgz /var/cache/pbuilder/powerpc.tgz
pbuilder --create --basetgz /var/cache/pbuilder/ppc64.tgz \
    --architecture ppc64 \
    --mirror http://ftp.ports.debian.org/debian-ports \
    --debootstrapopts --keyring=/usr/share/keyrings/debian-ports-archive-keyring.gpg \
    --debootstrapopts --include=debian-ports-archive-keyring
Step 2: Gradually Heat the Water so the Frog Doesn't Notice Then, I added the sources to sources.list, and added the architecture to dpkg:
deb [arch=powerpc] http://ftp.debian.org/debian sid main
deb [arch=ppc64] http://ftp.ports.debian.org/debian-ports sid main
deb-src http://ftp.debian.org/debian sid main
dpkg --add-architecture ppc64
apt update
Step 3: Time to Go Wild
apt install dpkg:ppc64
Obviously, that didn't work, in my case because libattr1 and libacl1 weren't in sync, so there was no valid way to install powerpc and ppc64 versions in parallel. I used pbuilder to compile the current version from sid for the architecture that wasn't up to date (IIRC, one for powerpc, and one for ppc64), manually installed the libraries, then tried again:
apt install dpkg:ppc64
Woo, it actually wants to do that. Now, that only half works, because apt calls dpkg twice, once to remove the old version, and once to install the new one. Your options at this point are
apt-get download dpkg:ppc64
dpkg -i dpkg_*_ppc64.deb
or if you didn't think far enough ahead, cursing followed by
cd /tmp
ar x /var/cache/apt/archives/dpkg_*_ppc64.deb
cd /
tar -xJf /tmp/data.tar.xz
dpkg -i /var/cache/apt/archives/dpkg_*_ppc64.deb
Step 4: Automate That Now, I'd like to get this a bit more convenient, so I had to repeat the same dance with apt and aptitude and their dependencies. Thanks to pbuilder, this wasn't too bad. With the aptitude resolver, it was then simple to upgrade a test package
aptitude install coreutils:ppc64 coreutils:powerpc-
The resolver did its thing, and asked whether I really wanted to remove an Essential package. I did, and it replaced the package just fine. So I asked dpkg for a list of all powerpc packages installed (since it's a ppc64 dpkg, it will report powerpc as foreign), massaged that into shape with grep and sed, and gave the result to aptitude as a command line. Some time later, aptitude finished, and I had a shiny 64 bit system. The crossgrade happened through an ssh session that remained open the entire time, and without a reboot. After closing the ssh session, the last 32 bit binary was deleted as it was no longer in use. There were a few minor hiccups during the process where dpkg refused to overwrite "shared" files with different versions, but these could be solved easily by manually installing the offending package with
dpkg --force-overwrite -i ...
and then resuming what aptitude was doing, using
aptitude install
So, in summary, this still works fairly well.

20 October 2016

Héctor Orón Martínez: Build a Debian package against Debian 8.0 using Download On Demand (DoD) service

In the previous post, the Open Build Service software architecture was overviewed. In the current blog post, a tutorial on setting up a package build with OBS from Debian packages is presented. Steps: Generate a test environment by creating a Stretch/Sid VM. Really, use whatever suits you best, but please create an untrusted test environment for this one. The current tutorial assumes $hostname is "stretch", and the suite should be stretch or sid. Be aware that copying & pasting configuration files from the current post might leave you with broken characters. Use the Debian Stretch weekly netinst CD. Enable the experimental repository:
# echo "deb http://httpredir.debian.org/debian experimental main" >> /etc/apt/sources.list.d/experimental.list
# apt-get update
Install and setup OBS server, api, worker and osc CLI packages
# apt-get install obs-server obs-api obs-worker osc
During the install process a MySQL database is needed, therefore if a MySQL server is not set up, a password needs to be provided.
When the OBS API database obs-api is created, we need to pick a password for it; provide "opensuse". The obs-api package will configure the apache2 HTTPS webserver (creating a dummy certificate for "stretch") to serve the OBS web UI.
Add "stretch" and "obs" aliases to the localhost entry in your /etc/hosts file.
Enable worker by setting ENABLED=1 in /etc/default/obsworker
Try to connect to the web UI https://stretch/
Log in to the OBS web UI; default login credentials: Admin/opensuse.
From command line tool, try to list projects in OBS
 $ osc -A https://stretch ls
Accept dummy certificate and provide credentials (defaults: Admin/opensuse)
If the install proceeds as expected, continue to the next step. Ensure all OBS services are running:
# backend services
obsrun     813  0.0  0.9 104960 20448 ?        Ss   08:33   0:03 /usr/bin/perl -w /usr/lib/obs/server/bs_dodup
obsrun     815  0.0  1.5 157512 31940 ?        Ss   08:33   0:07 /usr/bin/perl -w /usr/lib/obs/server/bs_repserver
obsrun    1295  0.0  1.6 157644 32960 ?        S    08:34   0:07  \_ /usr/bin/perl -w /usr/lib/obs/server/bs_repserver
obsrun     816  0.0  1.8 167972 38600 ?        Ss   08:33   0:08 /usr/bin/perl -w /usr/lib/obs/server/bs_srcserver
obsrun    1296  0.0  1.8 168100 38864 ?        S    08:34   0:09  \_ /usr/bin/perl -w /usr/lib/obs/server/bs_srcserver
memcache   817  0.0  0.6 346964 12872 ?        Ssl  08:33   0:11 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1
obsrun     818  0.1  0.5  78548 11884 ?        Ss   08:33   0:41 /usr/bin/perl -w /usr/lib/obs/server/bs_dispatch
obsserv+   819  0.0  0.3  77516  7196 ?        Ss   08:33   0:05 /usr/bin/perl -w /usr/lib/obs/server/bs_service
mysql      851  0.0  0.0   4284  1324 ?        Ss   08:33   0:00 /bin/sh /usr/bin/mysqld_safe
mysql     1239  0.2  6.3 1010744 130104 ?      Sl   08:33   1:31  \_ /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --log-error=/var/log/mysql/error.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
# web services
root      1452  0.0  0.1 110020  3968 ?        Ss   08:34   0:01 /usr/sbin/apache2 -k start
root      1454  0.0  0.1 435992  3496 ?        Ssl  08:34   0:00  \_ Passenger watchdog
root      1460  0.3  0.2 651044  5188 ?        Sl   08:34   1:46      \_ Passenger core
nobody    1465  0.0  0.1 444572  3312 ?        Sl   08:34   0:00      \_ Passenger ust-router
www-data  1476  0.0  0.1 855892  2608 ?        Sl   08:34   0:09  \_ /usr/sbin/apache2 -k start
www-data  1477  0.0  0.1 856068  2880 ?        Sl   08:34   0:09  \_ /usr/sbin/apache2 -k start
www-data  1761  0.0  4.9 426868 102040 ?       Sl   08:34   0:29 delayed_job.0
www-data  1767  0.0  4.8 425624 99888 ?        Sl   08:34   0:30 delayed_job.1
www-data  1775  0.0  4.9 426516 101708 ?       Sl   08:34   0:28 delayed_job.2
nobody    1788  0.0  5.7 496092 117480 ?       Sl   08:34   0:03 Passenger RubyApp: /usr/share/obs/api
nobody    1796  0.0  4.9 488888 102176 ?       Sl   08:34   0:00 Passenger RubyApp: /usr/share/obs/api
www-data  1814  0.0  4.5 282576 92376 ?        Sl   08:34   0:22 delayed_job.1000
www-data  1829  0.0  4.4 282684 92228 ?        Sl   08:34   0:22 delayed_job.1010
www-data  1841  0.0  4.5 282932 92536 ?        Sl   08:34   0:22 delayed_job.1020
www-data  1855  0.0  4.9 427988 101492 ?       Sl   08:34   0:29 delayed_job.1030
www-data  1865  0.2  5.0 492500 102964 ?       Sl   08:34   1:09 clockworkd.clock
www-data  1899  0.0  0.0  87100  1400 ?        S    08:34   0:00 /usr/bin/searchd --pidfile --config /usr/share/obs/api/config/production.sphinx.conf
www-data  1900  0.1  0.4 161620  8276 ?        Sl   08:34   0:51  \_ /usr/bin/searchd --pidfile --config /usr/share/obs/api/config/production.sphinx.conf
# OBS worker
root      1604  0.0  0.0  28116  1492 ?        Ss   08:34   0:00 SCREEN -m -d -c /srv/obs/run/worker/boot/screenrc
root      1605  0.0  0.9  75424 18764 pts/0    Ss+  08:34   0:06  \_ /usr/bin/perl -w ./bs_worker --hardstatus --root /srv/obs/worker/root_1 --statedir /srv/obs/run/worker/1 --id stretch:1 --reposerver http://obs:5252 --jobs 1
Create an OBS project for Download on Demand (DoD) Create a meta project file:
$ osc -A https://stretch:443 meta prj Debian:8 -e
<project name="Debian:8">
<title>Debian 8 DoD</title>
<description>Debian 8 DoD</description>
<person userid="Admin" role="maintainer"/>
<repository name="main">
<download arch="x86_64" url="http://deb.debian.org/debian/jessie/main" repotype="deb"/>
<arch>x86_64</arch>
</repository>
</project>
Visit webUI to check project configuration Create a meta project configuration file:
$ osc -A https://stretch:443 meta prjconf Debian:8 -e
Add the following file, as found at build.opensuse.org
Repotype: debian
# create initial user
Preinstall: base-passwd
Preinstall: user-setup
# required for preinstall images
Preinstall: perl
# preinstall essentials + dependencies
Preinstall: base-files base-passwd bash bsdutils coreutils dash debconf
Preinstall: debianutils diffutils dpkg e2fslibs e2fsprogs findutils gawk
Preinstall: gcc-4.9-base grep gzip hostname initscripts insserv libacl1
Preinstall: libattr1 libblkid1 libbz2-1.0 libc-bin libc6 libcomerr2 libdb5.3
Preinstall: libgcc1 liblzma5 libmount1 libncurses5 libpam-modules
Preinstall: libpcre3 libsmartcols1
Preinstall: libpam-modules-bin libpam-runtime libpam0g libreadline6
Preinstall: libselinux1 libsemanage-common libsemanage1 libsepol1 libsigsegv2
Preinstall: libslang2 libss2 libtinfo5 libustr-1.0-1 libuuid1 login lsb-base
Preinstall: mount multiarch-support ncurses-base ncurses-bin passwd perl-base
Preinstall: readline-common sed sensible-utils sysv-rc sysvinit sysvinit-utils
Preinstall: tar tzdata util-linux zlib1g
Runscripts: base-passwd user-setup base-files gawk
VMinstall: libdevmapper1.02.1
Order: user-setup:base-files
# Essential packages (this should also pull the dependencies)
Support: base-files base-passwd bash bsdutils coreutils dash debianutils
Support: diffutils dpkg e2fsprogs findutils grep gzip hostname libc-bin 
Support: login mount ncurses-base ncurses-bin perl-base sed sysvinit 
Support: sysvinit-utils tar util-linux
# Build-essentials
Required: build-essential
Prefer: build-essential:make
# build script needs fakeroot
Support: fakeroot
# lintian support would be nice, but breaks too much atm
#Support: lintian
# helper tools in the chroot
Support: less kmod net-tools procps psmisc strace vim
# everything below same as for Debian:6.0 (apart from the version macros ofc)
# circular dependendencies in openjdk stack
Order: openjdk-6-jre-lib:openjdk-6-jre-headless
Order: openjdk-6-jre-headless:ca-certificates-java
Keep: binutils cpp cracklib file findutils gawk gcc gcc-ada gcc-c++
Keep: gzip libada libstdc++ libunwind
Keep: libunwind-devel libzio make mktemp pam-devel pam-modules
Keep: patch perl rcs timezone
Prefer: cvs libesd0 libfam0 libfam-dev expect
Prefer: gawk locales default-jdk
Prefer: xorg-x11-libs libpng fam mozilla mozilla-nss xorg-x11-Mesa
Prefer: unixODBC libsoup glitz java-1_4_2-sun gnome-panel
Prefer: desktop-data-SuSE gnome2-SuSE mono-nunit gecko-sharp2
Prefer: apache2-prefork openmotif-libs ghostscript-mini gtk-sharp
Prefer: glib-sharp libzypp-zmd-backend mDNSResponder
Prefer: -libgcc-mainline -libstdc++-mainline -gcc-mainline-c++
Prefer: -libgcj-mainline -viewperf -compat -compat-openssl097g
Prefer: -zmd -OpenOffice_org -pam-laus -libgcc-tree-ssa -busybox-links
Prefer: -crossover-office -libgnutls11-dev
# alternative pkg-config implementation
Prefer: -pkgconf
Prefer: -openrc
Prefer: -file-rc
Conflict: ghostscript-library:ghostscript-mini
Ignore: sysvinit:initscripts
Ignore: aaa_base:aaa_skel,suse-release,logrotate,ash,mingetty,distribution-release
Ignore: gettext-devel:libgcj,libstdc++-devel
Ignore: pwdutils:openslp
Ignore: pam-modules:resmgr
Ignore: rpm:suse-build-key,build-key
Ignore: bind-utils:bind-libs
Ignore: alsa:dialog,pciutils
Ignore: portmap:syslogd
Ignore: fontconfig:freetype2
Ignore: fontconfig-devel:freetype2-devel
Ignore: xorg-x11-libs:freetype2
Ignore: xorg-x11:x11-tools,resmgr,xkeyboard-config,xorg-x11-Mesa,libusb,freetype2,libjpeg,libpng
Ignore: apache2:logrotate
Ignore: arts:alsa,audiofile,resmgr,libogg,libvorbis
Ignore: kdelibs3:alsa,arts,pcre,OpenEXR,aspell,cups-libs,mDNSResponder,krb5,libjasper
Ignore: kdelibs3-devel:libvorbis-devel
Ignore: kdebase3:kdebase3-ksysguardd,OpenEXR,dbus-1,dbus-1-qt,hal,powersave,openslp,libusb
Ignore: kdebase3-SuSE:release-notes
Ignore: jack:alsa,libsndfile
Ignore: libxml2-devel:readline-devel
Ignore: gnome-vfs2:gnome-mime-data,desktop-file-utils,cdparanoia,dbus-1,dbus-1-glib,krb5,hal,libsmbclient,fam,file_alteration
Ignore: libgda:file_alteration
Ignore: gnutls:lzo,libopencdk
Ignore: gnutls-devel:lzo-devel,libopencdk-devel
Ignore: pango:cairo,glitz,libpixman,libpng
Ignore: pango-devel:cairo-devel
Ignore: cairo-devel:libpixman-devel
Ignore: libgnomeprint:libgnomecups
Ignore: libgnomeprintui:libgnomecups
Ignore: orbit2:libidl
Ignore: orbit2-devel:libidl,libidl-devel,indent
Ignore: qt3:libmng
Ignore: qt-sql:qt_database_plugin
Ignore: gtk2:libpng,libtiff
Ignore: libgnomecanvas-devel:glib-devel
Ignore: libgnomeui:gnome-icon-theme,shared-mime-info
Ignore: scrollkeeper:docbook_4,sgml-skel
Ignore: gnome-desktop:libgnomesu,startup-notification
Ignore: python-devel:python-tk
Ignore: gnome-pilot:gnome-panel
Ignore: gnome-panel:control-center2
Ignore: gnome-menus:kdebase3
Ignore: gnome-main-menu:rug
Ignore: libbonoboui:gnome-desktop
Ignore: postfix:pcre
Ignore: docbook_4:iso_ent,sgml-skel,xmlcharent
Ignore: control-center2:nautilus,evolution-data-server,gnome-menus,gstreamer-plugins,gstreamer,metacity,mozilla-nspr,mozilla,libxklavier,gnome-desktop,startup-notification
Ignore: docbook-xsl-stylesheets:xmlcharent
Ignore: liby2util-devel:libstdc++-devel,openssl-devel
Ignore: yast2:yast2-ncurses,yast2-theme-SuSELinux,perl-Config-Crontab,yast2-xml,SuSEfirewall2
Ignore: yast2-core:netcat,hwinfo,wireless-tools,sysfsutils
Ignore: yast2-core-devel:libxcrypt-devel,hwinfo-devel,blocxx-devel,sysfsutils,libstdc++-devel
Ignore: yast2-packagemanager-devel:rpm-devel,curl-devel,openssl-devel
Ignore: yast2-devtools:perl-XML-Writer,libxslt,pkgconfig
Ignore: yast2-installation:yast2-update,yast2-mouse,yast2-country,yast2-bootloader,yast2-packager,yast2-network,yast2-online-update,yast2-users,release-notes,autoyast2-installation
Ignore: yast2-bootloader:bootloader-theme
Ignore: yast2-packager:yast2-x11
Ignore: yast2-x11:sax2-libsax-perl
Ignore: openslp-devel:openssl-devel
Ignore: java-1_4_2-sun:xorg-x11-libs
Ignore: java-1_4_2-sun-devel:xorg-x11-libs
Ignore: kernel-um:xorg-x11-libs
Ignore: tetex:xorg-x11-libs,expat,fontconfig,freetype2,libjpeg,libpng,ghostscript-x11,xaw3d,gd,dialog,ed
Ignore: yast2-country:yast2-trans-stats
Ignore: susehelp:susehelp_lang,suse_help_viewer
Ignore: mailx:smtp_daemon
Ignore: cron:smtp_daemon
Ignore: hotplug:syslog
Ignore: pcmcia:syslog
Ignore: avalon-logkit:servlet
Ignore: jython:servlet
Ignore: ispell:ispell_dictionary,ispell_english_dictionary
Ignore: aspell:aspel_dictionary,aspell_dictionary
Ignore: smartlink-softmodem:kernel,kernel-nongpl
Ignore: OpenOffice_org-de:myspell-german-dictionary
Ignore: mediawiki:php-session,php-gettext,php-zlib,php-mysql,mod_php_any
Ignore: squirrelmail:mod_php_any,php-session,php-gettext,php-iconv,php-mbstring,php-openssl
Ignore: simias:mono(log4net)
Ignore: zmd:mono(log4net)
Ignore: horde:mod_php_any,php-gettext,php-mcrypt,php-imap,php-pear-log,php-pear,php-session,php
Ignore: xerces-j2:xml-commons-apis,xml-commons-resolver
Ignore: xdg-menu:desktop-data
Ignore: nessus-libraries:nessus-core
Ignore: evolution:yelp
Ignore: mono-tools:mono(gconf-sharp),mono(glade-sharp),mono(gnome-sharp),mono(gtkhtml-sharp),mono(atk-sharp),mono(gdk-sharp),mono(glib-sharp),mono(gtk-sharp),mono(pango-sharp)
Ignore: gecko-sharp2:mono(glib-sharp),mono(gtk-sharp)
Ignore: vcdimager:libcdio.so.6,libcdio.so.6(CDIO_6),libiso9660.so.4,libiso9660.so.4(ISO9660_4)
Ignore: libcdio:libcddb.so.2
Ignore: gnome-libs:libgnomeui
Ignore: nautilus:gnome-themes
Ignore: gnome-panel:gnome-themes
Ignore: gnome-panel:tomboy
Substitute: utempter
%ifnarch s390 s390x ppc ia64
Substitute: java2-devel-packages java-1_4_2-sun-devel
%else
 %ifnarch s390x
Substitute: java2-devel-packages java-1_4_2-ibm-devel
 %else
Substitute: java2-devel-packages java-1_4_2-ibm-devel xorg-x11-libs-32bit
 %endif
%endif
Substitute: yast2-devel-packages docbook-xsl-stylesheets doxygen libxslt perl-XML-Writer popt-devel sgml-skel update-desktop-files yast2 yast2-devtools yast2-packagemanager-devel yast2-perl-bindings yast2-testsuite
#
# SUSE compat mappings
#
Substitute: gcc-c++ gcc
Substitute: libsigc++2-devel libsigc++-2.0-dev
Substitute: glibc-devel-32bit
Substitute: pkgconfig pkg-config
%ifarch %ix86
Substitute: kernel-binary-packages kernel-default kernel-smp kernel-bigsmp kernel-debug kernel-um kernel-xen kernel-kdump
%endif
%ifarch ia64
Substitute: kernel-binary-packages kernel-default kernel-debug
%endif
%ifarch x86_64
Substitute: kernel-binary-packages kernel-default kernel-smp kernel-xen kernel-kdump
%endif
%ifarch ppc
Substitute: kernel-binary-packages kernel-default kernel-kdump kernel-ppc64 kernel-iseries64
%endif
%ifarch ppc64
Substitute: kernel-binary-packages kernel-ppc64 kernel-iseries64
%endif
%ifarch s390
Substitute: kernel-binary-packages kernel-s390
%endif
%ifarch s390x
Substitute: kernel-binary-packages kernel-default
%endif
%define debian_version 800
Macros:
%debian_version 800
Visit webUI to check project configuration Create an OBS project linked to DoD
$ osc -A https://stretch:443 meta prj test -e
<project name="test">
<title>test</title>
<description>test</description>
<person userid="Admin" role="maintainer"/>
<repository name="Debian_8.0">
<path project="Debian:8" repository="main"/>
<arch>x86_64</arch>
</repository>
</project>
Visit webUI to check project configuration Adding a package to the project
$ osc -A https://stretch:443 co test ; cd test
$ mkdir hello ; cd hello ; apt-get source -d hello ; cd - ; 
$ osc add hello 
$ osc ci -m "New import" hello
The package should go to the "dispatched" state, then get into the "blocked" state while it downloads build dependencies from the DoD link; eventually it should start building. Please check the journal logs if something goes wrong or gets stuck. Visit the web UI to check the hello package build state. OBS logging to the journal: check in the journal logs that everything went fine:
$ sudo journalctl -u obsdispatcher.service -u obsdodup.service -u obsscheduler@x86_64.service -u obsworker.service -u obspublisher.service
Troubleshooting Currently we are facing a few issues with the web UI, and there are more issues that have not been reported; please do report them with reportbug obs-api.

30 September 2016

Chris Lamb: Free software activities in September 2016

Here is my monthly update covering what I have been doing in the free software world (previous month):
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most Linux distributions provide binary (or "compiled") packages to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced either maliciously or accidentally during this compilation process by promising that identical binary packages are always generated from a given source. My work in the Reproducible Builds project was also covered in our weekly reports #71, #72, #73 & #74. I made the following improvements to our tools:

diffoscope

diffoscope is our "diff on steroids" that will not only recursively unpack archives but will transform binary formats into human-readable forms in order to compare them.

  • Added a global Progress object to track the status of the comparison process allowing for graphical and machine-readable status indicators. I also blogged about this feature in more detail.
  • Moved the global Config object to a more Pythonic "singleton" pattern and ensured that constraints are checked on every change.

disorderfs

disorderfs is our FUSE filesystem that deliberately introduces nondeterminism into the results of system calls such as readdir(3).

  • Display the "disordered" behaviour we intend to show on startup. (#837689)
  • Support relative paths in command-line parameters (previously only absolute paths were permitted).

strip-nondeterminism

strip-nondeterminism is our tool to remove specific information from a completed build.

  • Fix an issue where temporary files were being left on the filesystem and add a test to avoid similar issues in future. (#836670)
  • Print an error if the file to normalise does not exist. (#800159)
  • Testsuite improvements:
    • Set the timezone in tests to avoid a FTBFS and add a File::StripNondeterminism::init method to the API to set tzset everywhere. (#837382)
    • "Smoke test" the strip-nondeterminism(1) and dh_strip_nondeterminism(1) scripts to prevent syntax regressions.
    • Add a testcase for .jar file ordering and normalisation.
    • Check the stripping process before comparing file attributes to make it less confusing on failure.
    • Move to a lookup table for descriptions of stat(1) indices and use that for nicer failure messages.
    • Don't uselessly test whether the inode number has changed.
  • Run perlcritic across the codebase and adopt some of its prescriptions including explicitly using oct(..) for integers with leading zeroes, avoiding mixing high and low-precedence booleans, ensuring subroutines end with a return statement, etc.

I also submitted 4 patches to fix specific reproducibility issues in golang-google-grpc, nostalgy, python-xlib & torque.


Debian

Patches contributed

Debian LTS

This month I have been paid to work 12.75 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 608-1 for mailman fixing a CSRF vulnerability.
  • Issued DLA 611-1 for jsch correcting a path traversal vulnerability.
  • Issued DLA 620-1 for libphp-adodb patching a SQL injection vulnerability.
  • Issued DLA 631-1 for unadf correcting a buffer underflow issue.
  • Issued DLA 634-1 for dropbear fixing a buffer overflow when parsing ASN.1 keys.
  • Issued DLA 635-1 for dwarfutils working around an out-of-bounds read issue.
  • Issued DLA 638-1 for the SELinux policycoreutils, patching a sandbox escape issue.
  • Enhanced Brian May's find-work --unassigned switch to take an optional "except this user" argument.
  • Marked matrixssl and inspircd as being unsupported in the current LTS version.

Uploads
  • python-django 1:1.10.1-1 New upstream release and ensure that django-admin startproject foo creates files with the correct shebang under Python 3.
  • gunicorn:
    • 19.6.0-5 Don't call chown(2) if it would be a no-op to avoid failure under snap.
    • 19.6.0-6 Remove now-obsolete conffiles and logrotate scripts; they should have been removed in 19.6.0-3.
  • redis:
    • 3.2.3-2 Call ulimit -n 65536 by default from SysVinit scripts to normalise the behaviour with systemd. I also bumped the Debian package epoch as the "2:" prefix made it look like we are shipping version 2.x. I additionally backported this upload to Debian Jessie.
    • 3.2.4-1 New upstream release, add missing -ldl for dladdr(3) & add missing dependency on lsb-base.
  • python-redis (2.10.5-2) Bump python-hiredis to Suggests to sync with Ubuntu and move to a machine-readable debian/copyright. I also backported this upload to Debian Jessie.
  • adminer (4.2.5-3) Move mysql-server dependencies to default-mysql-server. I also backported this upload to Debian Jessie.
  • gpsmanshp (1.2.3-5) on behalf of the QA team:
    • Move to "minimal" debhelper style, making the build reproducible. (#777446 & #792991)
    • Reorder linker command options to build with --as-needed (#729726) and add hardening flags.
    • Move to machine-readable copyright file, add missing #DEBHELPER# tokens to postinst and prerm scripts, tidy descriptions & other debian/control fields and other smaller changes.

I sponsored the upload of 5 packages from other developers:

I also NMU'd:



FTP Team

As a Debian FTP assistant I ACCEPTed 147 packages: alljoyn-services-1604, android-platform-external-doclava, android-platform-system-tools-aidl, aufs, bcolz, binwalk, bmusb, bruteforce-salted-openssl, cappuccino, captagent, chrome-gnome-shell, ciphersaber, cmark, colorfultabs, cppformat, dnsrecon, dogtag-pki, dxtool, e2guardian, flask-compress, fonts-mononoki, fwknop-gui, gajim-httpupload, glbinding, glewmx, gnome-2048, golang-github-googleapis-proto-client-go, google-android-installers, gsl, haskell-hmatrix-gsl, haskell-relational-query, haskell-relational-schemas, haskell-secret-sharing, hindsight, i8c, ip4r, java-string-similarity, khal, khronos-opencl-headers, liblivemedia, libshell-config-generate-perl, libshell-guess-perl, libstaroffice, libxml2, libzonemaster-perl, linux, linux-grsec-base, linux-signed, lua-sandbox, lua-torch-trepl, mbrola-br2, mbrola-br4, mbrola-de1, mbrola-de2, mbrola-de3, mbrola-ir1, mbrola-lt1, mbrola-lt2, mbrola-mx1, mimeo, mimerender, mongo-tools, mozilla-gnome-keyring, munin, node-grunt-cli, node-js-yaml, nova, open-build-service, openzwave, orafce, osmalchemy, pgespresso, pgextwlist, pgfincore, pgmemcache, pgpool2, pgsql-asn1oid, postbooks-schema, postgis, postgresql-debversion, postgresql-multicorn, postgresql-mysql-fdw, postgresql-unit, powerline-taskwarrior, prefix, pycares, pydl, pynliner, pytango, pytest-cookies, python-adal, python-applicationinsights, python-async-timeout, python-azure, python-azure-storage, python-blosc, python-can, python-canmatrix, python-chartkick, python-confluent-kafka, python-jellyfish, python-k8sclient, python-msrestazure, python-nss, python-pytest-benchmark, python-tenacity, python-tmdbsimple, python-typing, python-unidiff, python-xstatic-angular-schema-form, python-xstatic-tv4, quilt, r-bioc-phyloseq, r-cran-filehash, r-cran-png, r-cran-testit, r-cran-tikzdevice, rainbow-mode, repmgr, restart-emacs, restbed, ruby-azure-sdk, ruby-babel-source, ruby-babel-transpiler, ruby-diaspora-prosody-config, ruby-haikunator, ruby-license-finder, ruby-ms-rest, ruby-ms-rest-azure, ruby-rails-assets-autosize, ruby-rails-assets-blueimp-gallery, ruby-rails-assets-bootstrap, ruby-rails-assets-bootstrap-markdown, ruby-rails-assets-emojione, ruby-sprockets-es6, ruby-timeliness, rustc, skytools3, slony1-2, snmp-mibs-downloader, syslog-ng, test-kitchen, uctodata, usbguard, vagrant-azure, vagrant-mutate & vim.

27 August 2016

Arturo Borrero González: Why conntrackd in Debian is better with systemd


There has been some discussion [0] around my decision to drop sysvinit support in the conntrackd package in Debian [1] (version 1:1.4.4-2).

The rationale I used for such a move was sent to the debian-devel [2] mailing list, and here it is:


Lots of people have talked to me, both supporting and disagreeing with the change.

Before reading the rest of this blog post, please note that I'm not interested in the 'systemd vs sysvinit' war; from now on I will focus mainly on the subject of building a firewall cluster with netfilter technology and the reasons why I think sysvinit is irrelevant here.

I started working with firewall clusters around the time of Debian Squeeze. Back then, I used only sysvinit, because it was the Debian default init system and because I did not dive that deep into the internals of the firewall cluster itself.

Back then, the conntrackd Debian package included support for sysvinit by means of two files (the two files that I dropped in 1:1.4.4-2):


Using both files, you could start/stop conntrackd, and nothing more (for example, a proper status check was not implemented).

The conntrackd daemon is used in HA firewall clusters to replicate connection states between nodes of the cluster, so flow states are known in all nodes and they can properly perform stateful firewalling.

When you build HA clusters, there are 2 basic states of the cluster you may check and adjust: failover and failback.

With conntrackd, the failover situation is straightforward: the other node already has the state information from the dying node.
The failback situation is different: depending on the configuration of both the cluster and conntrackd, the new node may ask the other node for a complete synchronisation when it becomes alive again (at boot time, for example). This is the case if you are building a multi-master firewall cluster (i.e. both nodes are seeing traffic and filtering at the same time).

Asking for a complete synchronisation is done by means of a new instance of conntrackd, which communicates over a UNIX socket with the main conntrackd daemon (the one which is actually communicating with the other node). A failback boot procedure may look like this:

  1. system boots
  2. firewall ruleset loading
  3. networking up
  4. conntrackd up (main sync daemon: using 'conntrackd -d'; UNIX socket is opened)
  5. request sync with the other node (using 'conntrackd -n', which uses the UNIX socket opened in step 4)

With both sysvinit and systemd, step 5 may be implemented using another dummy service which performs the required operations and depends on the main service representing step 4.

The point here is that steps 4 and 5 suffer from a very bad race condition: the conntrackd daemon may open the UNIX socket (step 4) *after* we try to use it (step 5).

As you could probably imagine, this is the worst possible scenario: as one of the nodes of the cluster is missing important flow state information, the stateful firewall will drop packets and all the network monitoring alarms will start to ring... ironically, just after a failback operation, when all of our cluster is supposed to be up and running.

To avoid the race conditions, some typical hacks could be implemented:


After a bit of research, I found that the last point could be implemented very easily using libsystemd, so the daemon informs systemd of its internal state [4].

The solution is elegant, simple and offers more direct benefits:

So the clear winner for conntrackd is to go with systemd, using a service unit file with Type=notify. Using sysvinit is prone to the race condition described above. No serious firewall cluster with this architecture would use sysvinit.
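As an illustration of what this looks like in practice, here is a rough sketch of the two units involved. The unit names, paths and ExecStart lines below are my own assumptions for the sake of the example, not the actual Debian packaging:

# conntrackd.service (sketch)
[Unit]
Description=conntrackd connection state replication daemon
After=network.target

[Service]
Type=notify
# run in the foreground; via libsystemd the daemon signals READY=1
# once its UNIX socket is open, so dependent units start only after that
ExecStart=/usr/sbin/conntrackd

[Install]
WantedBy=multi-user.target

# conntrackd-sync.service (sketch): the "dummy service" for step 5
[Unit]
Description=Request full state synchronisation after failback
Requires=conntrackd.service
After=conntrackd.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/conntrackd -n

[Install]
WantedBy=multi-user.target

Because conntrackd.service is Type=notify, systemd only considers it started once the daemon has signalled readiness, so the ordered conntrackd-sync.service runs only when the socket is actually available, which is what closes the race between steps 4 and 5.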

It's obvious to me that systemd is a better technology than sysvinit. The sysvinit approach has been OK for a while, and for me it was fascinating when I started developing init scripts and learning how things worked.
But now we have systemd. It's the default in Debian. It's far better.

Conclusions: yes, I will probably reintroduce sysvinit files just to avoid any flamewar.


[0] https://lists.debian.org/debian-devel/2016/08/msg00448.html
[1] https://tracker.debian.org/pkg/conntrack-tools
[2] https://lists.debian.org/debian-devel/2016/08/msg00456.html
[3] https://sources.debian.net/src/systemd/231-4/debian/systemd.NEWS/
[4] http://git.netfilter.org/conntrack-tools/tree/src/systemd.c

15 June 2016

Andrew Shadura: Migrate to systemd without a reboot

Yesterday I was fixing an issue with one of the servers behind kallithea-scm.org: the hook intended to propagate pushes from Our Own Kallithea to Bitbucket stopped working. Until yesterday, that server was using Debian's flavour of System V init and djb's daemontools to keep things running. To make the hook asynchronous, I wrote a service to be managed by daemontools, so that concurrency issues would be solved by it. However, I didn't implement any timeouts, so when last week wget froze while pulling Weblate's hook, there was nothing to interrupt it, so the hook stopped working since daemontools thought it was already running and wouldn't re-trigger it. Killing wget helped, but I decided I needed to do something to prevent the situation from happening in the future. I've been using systemd at work for the last year, so I am now confident I'm happier with systemd than with daemontools, so I decided to switch the server to systemd. Not surprisingly, I prepared unit files in about 5 minutes without having to look into the manuals again, while with daemontools I had to check things every time I needed to change something. The tricky thing was the switch itself. It is a virtual server, presumably running in Xen, and I don't have access to the console, so if I break something, I need to summon Bradley Kuhn or someone from Conservancy, who kindly donated the server to the project. In any case, I decided to attempt to upgrade without a reboot, so that I have more options to roll back my changes in case things go wrong. After studying the manpages of both systemd's init and sysvinit's init, I realised I could install systemd as /sbin/init and ask the already running System V init to re-exec. However, systemd's init can't talk to System V init, so before installing systemd I made a backup of it as /tmp/init. It's also important to stop all running services (except probably ssh) to make sure systemd doesn't start second instances of each. And then: /tmp/init u, and we're running systemd! A couple of additional checks, and it's safe to reboot. Only when I had done all that did I realise that, had systemd not worked, I'd probably not have been able to undo my changes if my connection got interrupted. So, even though in the end it worked, it's probably not a good idea to perform such manipulations when you don't have an alternative way to connect to the server :)
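Condensed into commands, the procedure described above looks roughly like this. This is only a sketch of the steps in the post, not a recipe, and it assumes a jessie-era system where the systemd-sysv package provides /sbin/init; you really want an out-of-band console before trying it:

cp /sbin/init /tmp/init        # keep the old sysvinit binary around
# stop all running services except ssh, so systemd will not start duplicates
apt-get install systemd-sysv   # systemd becomes the new /sbin/init
/tmp/init u                    # ask the still-running sysvinit (PID 1) to re-exec /sbin/init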

1 May 2016

Ben Hutchings: 10 years as a Debian Developer

On 1st May 2006 my Debian account was created and I gained the status of Debian Developer. At that time I had already been to several BSPs and one DebConf, and maintained a few applications and Perl library packages. We were working toward the etch release and would soon hold DebConf 6 in Mexico. Ten years later, I still maintain one of those packages (sgt-puzzles) but the rest were either handed over to the Perl team or entirely removed. I wrote, maintained, and then gave away dvswitch all within this period. I have packaged some other applications that I needed to use - kup, ministat, odhcp6c - and I continue to maintain them. I have also made many NMUs, including security uploads, for all kinds of packages including bind9, e2fsprogs, (e)glibc, lvm2, sudo, sysvinit and udev. However, for about the past 7 years most of my work in Debian has been done within the kernel team, working on the Linux kernel and closely related packages - such as crda, ethtool, firmware-nonfree and initramfs-tools. I have also become an upstream developer for several of these projects. I'm proud to have played a part in the etch, lenny, squeeze, wheezy and jessie releases, and I have enjoyed attending 7 more DebConfs and many mini-DebConfs. I'm now looking forward to another great release (stretch) and to attending DebConf 16 in Cape Town this winter. I hope to still be active in Debian in 2026, looking back on another 10 years in this amazing project.

24 April 2016

Dominique Dumont: Automount usb devices with systemd

Hello. Ever since udisks-glue was obsoleted along with udisks (the first generation), I've been struggling to find a solution to automatically mount a USB drive when such a device is connected to a Kodi-based home cinema PC. I wanted to avoid writing dedicated scripts or udev rules. Systemd is quite powerful and I thought that a simple solution should be possible using systemd configuration. Actually, the auto-mount notion covers 2 scenarios:
  1. A device is mounted after being plugged in
  2. An already available device is mounted when a process accesses its mount point
The first case is the one needed with Kodi. The second may be useful, so it is also documented in this post. For the first case, add a line like the following to /etc/fstab:
/dev/sr0 /mnt/br auto defaults,noatime,auto,nofail 0 2
and reload systemd configuration:
sudo systemctl daemon-reload
The important parameters are "auto" and "nofail": with "auto", systemd mounts the filesystem as soon as the device is available. This behavior is different from sysvinit, where "auto" is taken into account only when mount -a is run by init scripts. "nofail" ensures that boot does not fail when the device is not available. The second case is handled by a line like the following one (split here to improve readability):
/dev/sr0 /mnt/br auto defaults,x-systemd.automount,\
   x-systemd.device-timeout=5,noatime,noauto 0 2
With the line above in /etc/fstab, the file system is mounted when a user does (for instance) ls /mnt/br (actually, the first ls fails and triggers the mount; a second ls gives the expected result. There's probably a way to improve this behavior, but I've not found it). The x-systemd.* parameters are documented in systemd.mount(5). Last but not least, using a plain device file (like /dev/sr0) works fine to automount optical devices. But it is difficult to predict the name of the device file created for a USB drive, so a LABEL or a UUID should be used in /etc/fstab instead of a plain device file, i.e. something like:
LABEL=my_usb_drive /mnt/my-drive auto defaults,auto,nofail 0 2
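If you do not know the label or UUID of the drive, blkid will show them; a quick sketch, where the device name and the output shown are only illustrative:
$ sudo blkid /dev/sdb1
/dev/sdb1: LABEL="my_usb_drive" UUID="1A2B-3C4D" TYPE="vfat"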
All the best
Tagged: kodi, systemd

21 April 2016

John Goerzen: Count me as a systemd convert

Back in 2014, I wrote about some negative first impressions of systemd. I also had a plea to debian-project to end all the flaming, pointing out that "jessie will still boot", noting that my preference was for sysvinit but things are what they are and it wasn't that big of a deal. Although I still have serious misgivings about the systemd upstream's attitude, I've got to say I find the system rather refreshing and useful in practice. Here's an example. I was debugging the boot on a server recently. It mounts a bunch of NFS filesystems and runs a third-party daemon that is started from an old-style /etc/init.d script. We had a situation where the NFS filesystems the daemon required didn't mount on boot. The daemon then was started, and unfortunately it basically does a mkdir -p on startup. So it started running and processing requests with negative results. So there were two questions: why did the NFS filesystems fail to start, and how could we make sure the daemon wouldn't start without them mounted? For the first, journalctl -xb was immensely helpful. It logged the status of each individual mount, and it turned out that it looked like a modprobe or kernel race condition when a bunch of NFS mounts were kicked off in parallel and all tried to load the nfsv4 module at the same time. That was easy enough to work around by adding nfsv4 to /etc/modules. Now for the other question: refusing to start the daemon if the filesystems weren't there. With systemd, this was actually trivial. I created /etc/systemd/system/mydaemon.service.requires (I'll call the service mydaemon here), and in it I created a symlink to /lib/systemd/system/remote-fs.target. Then systemctl daemon-reload, and boom, done. systemctl list-dependencies mydaemon will even show the dependency tree, the color-coded status of each item on it, and will actually show every single filesystem that remote-fs requires and the status of it in one command. Super handy. In a non-systemd environment, I'd probably be modifying the init script and doing a bunch of manual scripting to check the filesystems. Here, one symlink and one command did it, and I get tools to inspect the status of the mydaemon prerequisites for free. I've got to say, as someone that has occasionally had to troubleshoot boot ordering and update-rc.d symlink hell, troubleshooting this stuff in systemd is considerably easier and the toolset is more powerful. Yes, it has its set of poorly-documented complexity, but then so did sysvinit. I never thought the "world is falling" folks were right, but by now I can be counted among those that feel systemd has matured to the point where it truly is superior to sysvinit. Yes, in 2014 it had some bugs, but here in 2016 it looks pretty darn good and I feel like Debian's decision has been validated through my actual experience with it.
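Spelled out as commands, the trick looks something like this (mydaemon is the same placeholder name used in the post; the paths are the usual Debian ones):

mkdir /etc/systemd/system/mydaemon.service.requires
ln -s /lib/systemd/system/remote-fs.target \
    /etc/systemd/system/mydaemon.service.requires/
systemctl daemon-reload
systemctl list-dependencies mydaemon   # inspect the tree and the status of each mount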

3 March 2016

Simon Richter: Why sysvinit?

As most distributions have switched to systemd as default "init", some people have asked why we actually keep sysvinit around, as it's old and crusty, and systemd can do so much more. My answer is: systemd and sysvinit solve entirely different problems, and neither can ever replace the other fully. The big argument in favour of systemd is integration: when you keep everything in one process, it is easy to make sure that components talk to each other. That is nice, but kind of misses the point: integration could be done via IPC as well, and systemd uses a lot of it to separate privileges anyway. The tighter integration comes from something else: the configuration format. Where sysvinit uses a lot of shell scripts ("imperative" format), systemd uses explicit keywords ("descriptive" format), and this is the important design decision here: descriptive formats cannot do anything unanticipated, neither good nor bad. This is important for UI programmers: if there is a button that says "if I close the lid switch, suspend the laptop", then this needs to be connected to functionality that behaves this way, and this connection needs to work in both directions, so the user will find the active setting there when opening the dialog. So, we need to limit ourselves to a closed set of choices, so the UI people can prepare translations. This is the design tradeoff systemd makes: better integration in exchange for less flexibility. Of course, it is simple to allow "custom" actions everywhere, but that starts to break the "well-integrated" behaviour. There is an old blog post from Joey Hess about how to implement a simple alarm clock, using systemd, and this shows both how powerful and how limited the system is. On one hand, the single line WakeSystem=true takes care of the entire dependency chain attached to making sure the system is actually on -- when the system is going to suspend mode, some program needs to determine that there are scheduled jobs that should wake up the system, determine which of these is the next one, program the wakeup time into the hardware and only then allow the suspend to continue. On the other hand, the descriptive framework already breaks down in this simple example, because the default behaviour of the system is to go to sleep if the lid switch is closed, and the easiest way to fix this is to tell systemd to ignore the lid switch while the program is running. The IPC permissions disallow scheduled user jobs from setting this, so a "custom" action is taken that sets up the "ignore the lid switch" policy, changes to an unprivileged user, and runs the actual job. So, it is possible to get back flexibility, but this is traded for integration: while the job is running, any functionality that the "suspend laptop when the lid is closed" button had, is broken. For something as simple as a switch on a personal laptop that breaks if the user configures something that is non-standard, that is acceptable. However, the keywords in the service unit files, the timer files, etc. are also part of a descriptive framework, and the services using them expect that the things they promise are kept by the framework, so there is a limit to how many exceptions you can grant. SystemV init really comes from the other direction here. Nothing is promised to init scripts but a vague notion that they will be run with sufficient privileges. There is no dependency tracking that ensures that certain services will always start before others, just priorities.
You can add a dependency system like insserv if you like and make sure that all the requirements for it to work (i.e. services declaring their dependencies) are given, and there is a safe fallback of not parallelizing so we can always play it safe. Because there is so little promised to scripts, writing them becomes a bit tedious -- all of them have a large case statement where they parse the command line, all of them need to reimplement dropping privileges and finding the PID of a running process. There are lots of helpers for the more common things, like the LSB standard functions for colorful and matching console output, and Debian's start-stop-daemon for privilege and PID file handling, but in the end it remains each script's responsibility. Systemd will never be able to provide the same flexibility to admins while at the same time keeping the promises made in the descriptive language, and sysvinit will never reach the same amount of easy integration. I think it is fairly safe to predict both of these things, and I'd even go a step further: if any one of them tried, I'd ask the project managers what they were thinking. The only way to keep systems as complex as these running is to limit the scope of the project. With systemd, the complexity is kept in the internals, and it is important for manageability that all possible interactions are well-defined. That is a huge issue when writing adventure games, where you need to define interaction rules for each player action in combination with each object, and each pair of objects that can be combined with the "Use" action. Where adventure games would default to "This doesn't seem to work." or "I cannot use this.", this would not be a helpful error message when we're trying to boot, so we really need to cut down on actions and object types here. Sysvinit, on the other hand, is both more extreme in limiting the scope to very few actions (restart, wait, once) and only one object type (process that can be started and may occasionally terminate) in the main program, and a lot more open in what scripts outside the main program are allowed to do. The prime example for this is that the majority of configuration takes place outside the inittab even -- a script is responsible for the interface that services are registered by dropping init scripts into specific directories, and scripts can also create additional interfaces, but none of this is intrinsic to the init system itself. Which system is the right choice for your system is just that: your choice. You would choose a starting point, and customize what is necessary for your computer to do what you want it to do. Here's the thing: most users will be entirely happy with fully uncustomized systemd. It will suspend your laptop if you close the lid, and even give your download manager veto power. I fully support Debian's decision to use systemd as the default init for new installs, because it makes sense to use the default that is good for the vast majority of users. However, my opening statement still stands: systemd can never support all use cases, because it is constrained by its design. Trying to change that would turn it into an unmaintainable mess, and the authors are right in rejecting this. There are use cases that define the project, and use cases that are out of scope. This is what we need sysvinit for: the bits that the systemd project managers rightfully reject, but that are someone's use case. Telling these users to get lost would be extremely bad style for the "universal operating system." 
For example, suppose someone needs a service that asks a database server for a list of virtual machines to start, runs each in its own private network namespace named by replacing part of the virtual machine name with the last two octets of the host machine's IP address, then binds a virtual Ethernet device into the network namespace and sets up a NAT rule that allows the VMs to talk to the public Internet and a firewall rule to stop VMs from talking to each other. Such a beast would live outside of systemd's world view. You can easily start it, but systemd would not know how to monitor it (is it a good sign as long as some process is still running?), would not know how to shut down one instance, would not know how to map processes to instances and so on. SystemV init has the advantage here that none of these things are defined by the system, and as long as my init script has some code to handle "status", I can easily check up on my processes with the same interface I'd use for everything else. This requires me to have more knowledge of how the init system works, that is true; however, I'd also venture that the learning curve for sysvinit is shallower. If you know shell scripting, you can understand what init scripts do, without memorizing special rules that govern how certain keywords interact, and keeping track of changes to these rules as the feature set is expanded. That said, I'm looking at becoming the new maintainer of the sysvinit package so we can continue to ship it and give our users a choice, but I'd definitely need help, as I only have a few systems left that won't run properly with systemd because of its nasty habit of keeping lots of code in memory where sysvinit would unload it after use (by letting the shell process end), and these don't make good test systems. The overall plan is to set up a CI system. Ideally, I'd like to reach a state where we can actually ensure that the default configuration is somewhat similar regardless of which init system we use, and that it is possible to install with either (this is a nasty regression in jessie: bootstrapping a foreign-architecture system broke). This will take time, and step one is probably looking at bugs first -- I'm grateful for any help I can get here; primarily I'd like to hear about regressions via bug reports. tl;dr: as a distribution, we need both systemd and sysvinit. If you suggest that systemd can replace sysvinit, I will probably understand that you think that the systemd maintainers don't know project management. I'm reluctantly looking at becoming the official maintainer because I think it is important to keep sysvinit around.
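To make the "large case statement" point from earlier concrete, here is a stripped-down skeleton of the boilerplate nearly every init script reimplements. This is only a sketch: mydaemon is a placeholder, and a real Debian script would also carry an LSB header, source /lib/lsb/init-functions and handle more corner cases:

#!/bin/sh
DAEMON=/usr/sbin/mydaemon
PIDFILE=/run/mydaemon.pid

case "$1" in
  start)
    # the daemon is expected to write $PIDFILE itself (or you would add
    # --make-pidfile --background for a program that stays in the foreground)
    start-stop-daemon --start --quiet --oknodo --pidfile "$PIDFILE" --exec "$DAEMON"
    ;;
  stop)
    start-stop-daemon --stop --quiet --oknodo --pidfile "$PIDFILE"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  status)
    start-stop-daemon --status --pidfile "$PIDFILE" && echo "running" || echo "not running"
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|status}" >&2
    exit 3
    ;;
esac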

30 January 2016

Simon McVittie: GNOME Developer Experience hackfest: xdg-app + Debian

Over the last few days I've been at the GNOME Developer Experience hackfest in Brussels, looking into xdg-app and how best to use it in Debian and Debian derivatives. xdg-app is basically a way to run "non-core" software on Linux distributions, analogous to apps on Android and iOS. It doesn't replace distributions like Debian or packaging systems, but it adds a layer above them. It's mostly aimed towards third-party apps obtained from somewhere that isn't your distribution vendor, aiming to address a few long-standing problems in that space: To address the first point, each application uses a specified "runtime", which is available as /usr inside its sandbox. This can be used to run application bundles with multiple, potentially incompatible sets of dependencies within the same desktop environment. A runtime can be updated within its branch - for instance, if an application uses the "GNOME 3.18" runtime (consisting of a basic Linux system, the GNOME 3.18 libraries, other related libraries like Mesa, and their recursive dependencies like libjpeg), it can expect to see minor-version updates from GNOME 3.18.x (including any security updates that might be necessary for the bundled libraries), but not a jump to GNOME 3.20. To address the second issue, the plan is for application bundles to be available as a single file, containing metadata (such as the runtime to use), the app itself, and any dependencies that are not available in the runtime (which the app vendor is responsible for updating if necessary). However, the primary way to distribute and upgrade runtimes and applications is to package them as OSTree repositories, which provide a git-like content-addressed filesystem, with efficient updates using binary deltas. The resulting files are hard-linked into place. To address the last point, application bundles run partially isolated from the wider system, using containerization techniques such as namespaces to prevent direct access to system resources. Resources from outside the sandbox can be accessed via "portal" services, which are responsible for access control; for example, the Documents portal (the only one, so far) displays an "Open" dialog outside the sandbox, then allows the application to access only the selected file. xdg-app for Debian One thing I've been doing at this hackfest is improving the existing Debian/Ubuntu packaging for xdg-app (and its dependencies ostree and libgsystem), aiming to get it into a state where I can upload it to Debian experimental. Because xdg-app aims to be a general freedesktop project, I'm currently intending to make it part of the "Utopia" packaging team alongside projects like D-Bus and polkit, but I'm open to suggestions if people want to co-maintain it elsewhere. In the process of updating xdg-app, I sent various patches to Alex, mostly fixing build and test issues, which are in the new 0.4.8 release. I'd appreciate co-maintainers and further testing for this stuff, particularly ostree: ostree is primarily a whole-OS deployment technology, which isn't a use-case that I've tested, and in particular ostree-grub2 probably doesn't work yet. Source code: Binaries (no trust path, so only use these if you have a test VM): The "Hello, World" of xdg-apps Another thing I set out to do here was to make a runtime and an app out of Debian packages. 
Most of the test applications in and around GNOME use the "freedesktop" or "GNOME" runtimes, which consist of a Yocto base system and lots of RPMs, are rebuilt from first principles on-demand, and are extensive and capable enough that they make it somewhat non-obvious what's in an app or a runtime. So, here's a step-by-step route through xdg-app, first using typical GNOME instructions, but then using the simplest GUI app I could find - xvt, a small xterm clone. I'm using a Debian testing (stretch) x86_64 virtual machine for all this. xdg-app currently requires systemd-logind to put users and apps in cgroups, either with systemd as pid 1 (systemd-sysv) or systemd-shim and cgmanager; I used the default systemd-sysv. In principle it could work with plain cgmanager, but nobody has contributed that support yet. Demonstrating an existing xdg-app Debian's kernel is currently patched to be able to allow unprivileged users to create user namespaces, but make it runtime-configurable, because there have been various security issues in that feature, making it a security risk for a typical machine (and particularly a server). Hopefully unprivileged user namespaces will soon be secure enough that we can enable them by default, but for now, we have to do one of three things to let xdg-app use them: First, we'll need a runtime. The standard xdg-app tutorial would tell you to download the "GNOME Platform" version 3.18. To do that, you'd add a remote, which is a bit like a git remote, and a bit like an apt repository:
$ wget http://sdk.gnome.org/keys/gnome-sdk.gpg
$ xdg-app remote-add --user --gpg-import=gnome-sdk.gpg gnome \
    http://sdk.gnome.org/repo/
(I'm ignoring considerations like trust paths and security here, for brevity; in real life, you'd want to obtain the signing key via https and/or have a trust path to it, just like you would for a secure-apt signing key.) You can list what's available in a remote:
$ xdg-app remote-ls --user gnome
...
org.freedesktop.Platform
...
org.freedesktop.Platform.Locale.cy
...
org.freedesktop.Sdk
...
org.gnome.Platform
...
The Platform runtimes are what we want here: they are collections of runtime libraries with which you can run an application. The Sdk runtimes add development tools, header files, etc. to be able to compile apps that will be compatible with the Platform. For now, all we want is the GNOME 3.18 platform:
$ xdg-app install --user gnome org.gnome.Platform 3.18
Next, we can install an app that uses it, from Alex Larsson's nightly builds of a subset of GNOME. The server they're on doesn't have a great deal of bandwidth, so be nice :-)
$ wget http://209.132.179.2/keys/nightly.gpg
$ xdg-app remote-add --user --gpg-import=nightly.gpg nightly \
    http://209.132.179.2/repo/
$ xdg-app install --user nightly org.mypaint.MypaintDevel
We now have one app, and the runtime it needs:
$ xdg-app list
org.mypaint.MypaintDevel
$ xdg-app run org.mypaint.MypaintDevel
[you see a GUI window]
Digression: what's in a runtime? Behind the scenes, xdg-app runtimes and apps are both OSTree trees. This means the ostree tool, from the package of the same name, can be used to inspect them.
$ sudo apt install ostree
$ ostree refs --repo ~/.local/share/xdg-app/repo
gnome:runtime/org.gnome.Platform/x86_64/3.18
nightly:app/org.mypaint.MypaintDevel/x86_64/master
A "ref" has roughly the same meaning as in git: something like a branch or a tag. ostree can list the directory tree that it represents:
$ ostree ls --repo ~/.local/share/xdg-app/repo \
    runtime/org.gnome.Platform/x86_64/3.18
d00755 0 0      0 /
-00644 0 0    493 /metadata
d00755 0 0      0 /files
$ ostree ls --repo ~/.local/share/xdg-app/repo \
    runtime/org.gnome.Platform/x86_64/3.18 /files
d00755 0 0      0 /files
l00777 0 0      0 /files/local -> ../var/usrlocal
l00777 0 0      0 /files/sbin -> bin
d00755 0 0      0 /files/bin
d00755 0 0      0 /files/cache
d00755 0 0      0 /files/etc
d00755 0 0      0 /files/games
d00755 0 0      0 /files/include
d00755 0 0      0 /files/lib
d00755 0 0      0 /files/lib64
d00755 0 0      0 /files/libexec
d00755 0 0      0 /files/share
d00755 0 0      0 /files/src
You can see that /files in a runtime is basically a copy of /usr. This is not coincidental: the runtime's /files gets mounted at /usr inside the xdg-app container. There is also some metadata, which is in the ini-like syntax seen in .desktop files:
$ ostree cat --repo ~/.local/share/xdg-app/repo \
    runtime/org.gnome.Platform/x86_64/3.18 /metadata
[Runtime]
name=org.gnome.Platform/x86_64/3.16
runtime=org.gnome.Platform/x86_64/3.16
sdk=org.gnome.Sdk/x86_64/3.16
[Extension org.freedesktop.Platform.GL]
version=1.2
directory=lib/GL
[Extension org.freedesktop.Platform.Timezones]
version=1.2
directory=share/zoneinfo
[Extension org.gnome.Platform.Locale]
directory=share/runtime/locale
subdirectories=true
[Environment]
GI_TYPELIB_PATH=/app/lib/girepository-1.0
GST_PLUGIN_PATH=/app/lib/gstreamer-1.0
LD_LIBRARY_PATH=/app/lib:/usr/lib/GL
Looking at an app, the situation is fairly similar:
$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master
d00755 0 0      0 /
-00644 0 0    258 /metadata
d00755 0 0      0 /export
d00755 0 0      0 /files
This time, /files maps to what will become /app for the application, which was compiled with --prefix=/app:
$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /files
d00755 0 0      0 /files
-00644 0 0   4599 /files/manifest.json
d00755 0 0      0 /files/bin
d00755 0 0      0 /files/lib
d00755 0 0      0 /files/share
There is also a /export directory, which is made visible to the host system so that the contained app can appear as a "first-class citizen" in menus:
$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /export
d00755 0 0      0 /export
d00755 0 0      0 /export/share
user@debian:~$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /export/share
d00755 0 0      0 /export/share
d00755 0 0      0 /export/share/app-info
d00755 0 0      0 /export/share/applications
d00755 0 0      0 /export/share/icons
user@debian:~$ ostree ls --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /export/share/applications
d00755 0 0      0 /export/share/applications
-00644 0 0    715 /export/share/applications/org.mypaint.MypaintDevel.desktop
$ ostree cat --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master \
    /export/share/applications/org.mypaint.MypaintDevel.desktop
[Desktop Entry]
Version=1.0
Name=(Nightly) MyPaint
TryExec=mypaint
Exec=mypaint %f
Comment=Painting program for digital artists
...
Comment[zh_HK]= 
GenericName=Raster Graphics Editor
GenericName[fr]= diteur d'Image Matricielle
MimeType=image/openraster;image/png;image/jpeg;
Type=Application
Icon=org.mypaint.MypaintDevel
StartupNotify=true
Categories=Graphics;GTK;2DGraphics;RasterGraphics;
Terminal=false
Again, there's some metadata:
$ ostree cat --repo ~/.local/share/xdg-app/repo \
    app/org.mypaint.MypaintDevel/x86_64/master /metadata
[Application]
name=org.mypaint.MypaintDevel
runtime=org.gnome.Platform/x86_64/3.18
sdk=org.gnome.Sdk/x86_64/3.18
command=mypaint
[Context]
shared=ipc;
sockets=x11;pulseaudio;
filesystems=host;
[Extension org.mypaint.MypaintDevel.Debug]
directory=lib/debug
Building a runtime, probably the wrong way The way in which the reference/demo runtimes and containers are generated is... involved. As far as I can tell, there's a base OS built using Yocto, and the actual GNOME bits come from RPMs. However, we don't need to go that far to get a working runtime. In preparing this runtime I'm probably completely ignoring some best-practices and tools - but it works, so it's good enough. First we'll need a repository:
$ sudo install -d -o$(id -nu) /srv/xdg-apps
$ ostree init --repo /srv/xdg-apps
I'm just keeping this local for this demonstration, but you could rsync it to a web server's exported directory or something - a lot like a git repository, it's just a collection of files. We want everything in /usr because that's what xdg-app expects, hence usrmerge:
$ sudo mount -t tmpfs -o mode=0755 tmpfs /mnt
$ sudo debootstrap --arch=amd64 --include=libx11-6,usrmerge \
    --variant=minbase stretch /mnt http://192.168.122.1:3142/debian
$ sudo mkdir /mnt/runtime
$ sudo mv /mnt/usr /mnt/runtime/files
This obviously has a lot of stuff in it that we don't need - most obviously init, apt and dpkg - but it's Good Enough. We will also need some metadata. This is sufficient:
$ sudo sh -c 'cat > /mnt/runtime/metadata'
[Runtime]
name=org.debian.Debootstrap/x86_64/8.20160130
runtime=org.debian.Debootstrap/x86_64/8.20160130
That's a runtime. We can commit it to ostree, and generate xdg-app metadata:
$ ostree commit --repo /srv/xdg-apps \
    --branch runtime/org.debian.Debootstrap/x86_64/8.20160130 \
    /mnt/runtime
$ fakeroot ostree commit --repo /srv/xdg-apps \
    --branch runtime/org.debian.Debootstrap/x86_64/8.20160130
$ fakeroot xdg-app build-update-repo /srv/xdg-apps
(I'm not sure why ostree and xdg-app report "Operation not permitted" when we aren't root or fakeroot - feedback welcome.) build-update-repo would presumably also be the right place to GPG-sign your repository, if you were doing that. We can add that as another xdg-app remote:
$ xdg-app remote-add --user --no-gpg-verify local file:///srv/xdg-apps
$ xdg-app remote-ls --user local
org.debian.Debootstrap
Building an app, probably the wrong way The right way to build an app is to build a "SDK" runtime - similar to that platform runtime, but with development files and tools - and recompile the app and any missing libraries with ./configure --prefix=/app && make && make install. I'm not going to do that, because simplicity is nice, and I'm reasonably sure xvt doesn't actually hard-code /usr into the binary:
$ install -d xvt-app/files/bin
$ sudo apt-get --download-only install xvt
$ dpkg-deb --fsys-tarfile /var/cache/apt/archives/xvt_2.1-20.1_amd64.deb \
    | tar -xvf - ./usr/bin/xvt
./usr/
./usr/bin/
./usr/bin/xvt
...
$ mv usr/bin/xvt xvt-app/files/bin/
Again, we'll need metadata, and it's much simpler than the more production-quality GNOME nightly builds:
$ cat > xvt-app/metadata
[Application]
name=org.debian.packages.xvt
runtime=org.debian.Debootstrap/x86_64/8.20160130
command=xvt
[Context]
sockets=x11;
$ fakeroot ostree commit --repo /srv/xdg-apps \
    --branch app/org.debian.packages.xvt/x86_64/2.1-20.1 xvt-app
$ fakeroot xdg-app build-update-repo /srv/xdg-apps
Updating appstream branch
No appstream data for runtime/org.debian.Debootstrap/x86_64/8.20160130
No appstream data for app/org.debian.packages.xvt/x86_64/2.1-20.1
Updating summary
$ xdg-app remote-ls --user local
org.debian.Debootstrap
org.debian.packages.xvt
The obligatory screenshot OK, good, now we can install it:
$ xdg-app install --user local org.debian.Debootstrap 8.20160130
$ xdg-app install --user local org.debian.packages.xvt 2.1-20.1
$ xdg-app run --branch=2.1-20.1 org.debian.packages.xvt
[xvt running in a container] You can play around with the shell in the xvt and see what you can and can't do in the container. I'm sure there were better ways to do most of this, but I think there's value in having such a simplistic demo to go alongside the various GNOMEish apps.
Acknowledgements: Thanks to all those!

29 December 2015

Ritesh Raj Sarraf: Device Mapper Multipath status in Debian

For Debian Jessie, the multipath support relied on sysvinit scripts. So, if you were using systemd, the level of testing would have been minimal. At DebConf15, I got to meet many people whom I'd worked with, over emails, over the years. With every person, my ask was to use the SAN Storage stack in a test environment, and report bugs early. Not after the next release. This also applies to the usual downstream distribution projects. That said, today, I spent time building a Root File System on SAN setup using the following stack, of the versions that'd be part of the next stable release:
  • Linux
  • Open-iSCSI Initiator
  • Device Mapper Multipath
  • LIO Target
I'm pretty happy that nothing much has changed in terms of setup, from what has already been documented in README.Debian files. The systemd integration has been very transparent. But that is my first-hand experience. I request all users of the above-mentioned stack to build the setup and report issues, if any. Please do not wait for the last minute of the release/freeze.
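For reference, the sanroot alias seen in the output below is the kind of thing one sets in /etc/multipath.conf. A minimal sketch, reusing the WWID shown further down, though not necessarily the exact configuration used on this machine:

defaults {
        user_friendly_names yes
}

multipaths {
        multipath {
                wwid    36001405ead943c8222140268e019ba49
                alias   sanroot
        }
}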
root@debian-sanboot:~# systemctl status -l multipath-tools
  multipathd.service - Device-Mapper Multipath Device Controller
   Loaded: loaded (/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2015-12-29 18:38:58 IST; 1min 23s ago
  Process: 246 ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
 Main PID: 260 (multipathd)
   Status: "running"
   CGroup: /system.slice/multipathd.service
            260 /sbin/multipathd -d -s
Dec 29 18:39:04 debian-sanboot multipathd[260]: sdb [8:16]: path added to devmap sanroot
Dec 29 18:39:04 debian-sanboot multipathd[260]: sdc: add path (uevent)
Dec 29 18:39:04 debian-sanboot multipathd[260]: sanroot: load table [0 16777216 multipath 0 0 3 1 service-time 0 1 1 8:16 1 service-time 0 1 1 8:0 1 service-time 0 1 1 8:32 1]
Dec 29 18:39:04 debian-sanboot multipathd[260]: sdc [8:32]: path added to devmap sanroot
Dec 29 18:39:04 debian-sanboot multipathd[260]: sdd: add path (uevent)
Dec 29 18:39:04 debian-sanboot multipathd[260]: sanroot: load table [0 16777216 multipath 0 0 4 1 service-time 0 1 1 8:16 1 service-time 0 1 1 8:32 1 service-time 0 1 1 8:48 1 service-time 0 1 1 8:0 1]
Dec 29 18:39:04 debian-sanboot multipathd[260]: sdd [8:48]: path added to devmap sanroot
Dec 29 18:39:13 debian-sanboot multipathd[260]: sanroot: sda - directio checker reports path is up
Dec 29 18:39:13 debian-sanboot multipathd[260]: 8:0: reinstated
Dec 29 18:39:13 debian-sanboot multipathd[260]: sanroot: remaining active paths: 4
root@debian-sanboot:~# multipath -ll
sanroot (36001405ead943c8222140268e019ba49) dm-0 LIO-ORG,IBLOCK
size=8.0G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 4:0:0:0 sdb 8:16 active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 3:0:0:0 sdc 8:32 active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 5:0:0:0 sdd 8:48 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 2:0:0:0 sda 8:0  active ready running
root@debian-sanboot:~# iscsiadm -m session
tcp: [1] 172.16.20.40:3260,1 iqn.2003-01.org.linux-iscsi.debian.sanboot (non-flash)
tcp: [2] 172.16.20.41:3260,1 iqn.2003-01.org.linux-iscsi.debian.sanboot (non-flash)
tcp: [3] 172.16.20.42:3260,1 iqn.2003-01.org.linux-iscsi.debian.sanboot (non-flash)
tcp: [4] 172.16.20.43:3260,1 iqn.2003-01.org.linux-iscsi.debian.sanboot (non-flash)
root@debian-sanboot:~# mount | grep sanroot
/dev/mapper/sanroot on / type ext4 (rw,relatime,errors=remount-ro,stripe=8191,data=ordered)

4 November 2015

Raphaël Hertzog: My Free Software Activities in October 2015

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it's one of the best ways to find volunteers to work with me on projects that matter to me. Debian LTS This month I have been paid to work 13.25 hours on Debian LTS. During this time I worked on the following things: I also started a conversation about what paid contributors could work on if they have some spare cycles, as the current funding level might allow us to invest some time on work outside of just plain security updates. The Debian Administrator's Handbook I spent quite some time finalizing the Jessie book update, both for the content and for the layout of the printed book. Debian Handbook: cover of the jessie edition Misc Debian work GNOME 3.18. I uploaded a new gnome-shell-timer working with GNOME Shell 3.18 and I filed bugs #800660 and #802480 about an annoying gnome-keyring regression. I did multiple test rounds with the Debian maintainers (Dmitry Shachnev, kudos to him!) and the upstream developers (see here and here). Apart from those regressions, I like GNOME 3.18! Python-modules team migration to Git. After the Git migration, and since the team policy now imposes usage of git-dpm on all members, I made some tries with it on the python-django package while pushing version 1.8.5 to experimental. And the least I can say is that I'm not pleased with the result. I thus filed 3 bugs summarizing the problems I have with git-dpm: #801666 (no way to set the upstream branch names from within the repository), #801667 (no clean way to merge between packaging branches), #801668 (does not create upstream tag immediately on tarball import). That is on top of other randomly stupid bugs that were already reported like #801548 (does not work with perfectly valid pre-existing upstream tags). Django packaging. I filed bugs on all packages build-depending on python-django that fail to build with Django 1.8 and informed them that I would upload Django 1.8 to unstable in early November (it's done already). Then I fixed python-django-jsonfield myself since Distro Tracker relies on this package. Following this small mass-bug filing, I filed a wishlist bug on devscripts to improve the mass-bug helper script (see #801926). And since I used ratt to rebuild the packages, I filed a wishlist issue on this new tool as well. Tryton 3.6 upgrade. I upgraded my own Tryton installation to version 3.6 and filed bug #803066 because the SysV init script was not working properly. That also reminded me that the DD process of Matthias Behrle (the tryton package maintainer) was stalled due to a bug in the NM infrastructure, so I pinged the NM team and we sorted out a way for me to advocate him and get his process going. Distro Tracker. I continued my work to refactor the way we handle incoming mail processing (branch people/hertzog/mailprocessing). It's now mostly finished and I need to deploy it in a test environment before being able to roll it out on tracker.debian.org. Thanks. See you next month for a new summary of my activities.

26 October 2015

Craig Small: ps standards and locales

I looked at two interesting issues today around the ps program in the procps project. One had a solution and the other I'm puzzled about. ps User-defined Format Issue #9 was quite the puzzle. The output of ps changed depending on whether a different option had a hyphen before it or not. First, the expected output:
$ ps p $$ -o pid=pid,comm=comm
 pid comm
31612 bash
Next, the unusual output.
$ ps -p $$ -o pid=pid,comm=comm
pid,comm=comm
 31612
The difference being that in the second we have -p, not p. The second unexpected thing about this is that it was designed that way. The Unix98 standard apparently permits this sort of craziness. To me it is a useless feature that will more likely than not confuse people. Within ps, depending on what sort of flags you start with, you use a sysv parser or a bsd parser. One of them triggered the Unix98 compatibility option while the other did not, hence the change in behavior. The next version of procps will ship with a ps that has the user-defined output format of the first example. I had a look at the latest standard, IEEE 1003.1-2013, which doesn't seem to have anything like that in it. Short Month Length This one has got me stuck. A user has reported in issue #5 that when they use their locale, columns such as start time get mis-aligned because their short month is longer than 3 characters. They also mention some other columns for memory etc. are not long enough, but that's a generic problem that is impossible to fix sensibly. OK, for the month problem the fix would be to know what the month length is and set the column width for those columns to that plus a bit more for the other fixed components, simple really. Except: how do you know, for a specific locale, what its short month length is? I always assumed it was three! I haven't found anything that has this information. Note, I'm not looking for strlen() but some function that has the maximum length for short month names (e.g. Jan, Feb, Mar etc). This also got me thinking how safe some of those date-to-string functions are if you have a static buffer as the destination. It's not safe to assume they will be DD MMM YYYY because there might be more Ms. So if you know how to work out the short month name length, let me know!
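One quick way to see how long the abbreviated month names really are in the current locale is to loop over the months with GNU date; a rough sketch, with the caveat that, depending on your awk, length() may count bytes rather than characters in multibyte locales:

for m in $(seq -w 1 12); do
    date -d "2016-$m-01" +%b
done | awk '{ if (length($0) > max) max = length($0) } END { print "longest abbreviated month name:", max }'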

7 October 2015

Emanuele Rocca: systemd is your friend

Today I want to talk a bit about some cool features of systemd, the default Debian init system since the release of Jessie. Ubuntu has also adopted systemd in 15.04, meaning that you are going to find it literally everywhere.
Logging The component responsible for logging in systemd is called journal. It collects and stores logs in a structured, indexed journal (hence the name). The journal can replace traditional syslog daemons such as rsyslog and syslog-ng, or work together with them. By default Debian keeps on using rsyslog, but if you don't need to ship logs to a centralized server (or do other fancy things) it is possible to stop using rsyslog right now and rely on systemd-journal instead. The obvious question is: why would anybody use a binary format for logs instead of a bunch of tried and true plain-text files? As it turns out, there are quite a lot of good reasons to do so. The killer features of systemd-journald for me are:
  • Index tons of logs while being able to search with good performance: O(log(n)) instead of O(n) which is what you get with text files
  • No need to worry about log rotation anymore, in any shape or form
The last point in particular is really critical in my opinion. Traditional log rotation implementations rely on cron jobs to check how much disk space is used by logs, compressing/removing old files. Log rotation is usually: 1) annoying to configure; 2) hard to get right; 3) prone to DoS attacks. With journald, there is pretty much nothing to configure. Log rotation is built into the daemon's disk space allocation logic itself. This also avoids vulnerability windows due to time-based rotation, which is what you get with logrotate and friends. Enough high-level discussions though, here is how to use the journal! Check if you already have the directory /var/log/journal, otherwise create it (as root). Then restart systemd-journald as follows: sudo systemctl restart systemd-journald You can get all messages produced since the last boot with journalctl -b. All messages produced today can get extracted using journalctl --since=today. Want to get all logs related to ssh? Try with journalctl _SYSTEMD_UNIT=ssh.service. There are many more filtering options available, you can read all about them with man journalctl. journald's configuration file is /etc/systemd/journald.conf. Two of the most interesting options are SystemMaxUse and SystemKeepFree, which can be used to change the amount of disk space dedicated to logging. They default to 10% and 15% of the /var filesystem respectively. Here is a little cheatsheet:
journalctl -b                # Show all messages since last boot
journalctl -f                # Tail your logs
journalctl --since=yesterday # Show all messages produced since yesterday
journalctl -pcrit            # Filter messages by priority
journalctl /bin/su           # Filter messages by program
journalctl --disk-usage      # The amount of space in use for journaling
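The SystemMaxUse and SystemKeepFree options mentioned above go into the [Journal] section of /etc/systemd/journald.conf; for example (the values here are only illustrative):

[Journal]
SystemMaxUse=500M
SystemKeepFree=1G

Restart systemd-journald afterwards, as shown earlier, for the change to take effect.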
Further reading:
Containers A relatively little known component of systemd is systemd-nspawn. It is a small, straightforward container manager. If you don't already have a chroot somewhere, here is how to create a basic Debian Jessie chroot under /srv/chroots/jessie:
$ debootstrap jessie /srv/chroots/jessie http://http.debian.net/debian/
With systemd-nspawn you can easily run a shell inside the chroot:
$ sudo systemd-nspawn -D /srv/chroots/jessie
Spawning container jessie on /srv/chroots/jessie.
Press ^] three times within 1s to kill container.
/etc/localtime is not a symlink, not updating container timezone.
root@jessie:~#
Done. Everything works out of the box: no need for you to mount /dev, /run and friends, systemd-nspawn took care of that. Networking also works. If you want to actually boot the system, just add the -b switch to the previous command:
$ sudo systemd-nspawn -b -D /srv/chroots/jessie
Spawning container jessie on /srv/chroots/jessie.
Press ^] three times within 1s to kill container.
/etc/localtime is not a symlink, not updating container timezone.
systemd 215 running in system mode. (+PAM +AUDIT +SELINUX +IMA +SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ -SECCOMP -APPARMOR)
Detected virtualization 'systemd-nspawn'.
Detected architecture 'x86-64'.
Welcome to Debian GNU/Linux jessie/sid!
Set hostname to <orion>.
[  OK  ] Reached target Remote File Systems (Pre).
[  OK  ] Reached target Encrypted Volumes.
[  OK  ] Reached target Paths.
[  OK  ] Reached target Swap.
[  OK  ] Created slice Root Slice.
[  OK  ] Created slice User and Session Slice.
[  OK  ] Listening on /dev/initctl Compatibility Named Pipe.
[  OK  ] Listening on Delayed Shutdown Socket.
[  OK  ] Listening on Journal Socket (/dev/log).
[  OK  ] Listening on Journal Socket.
[  OK  ] Created slice System Slice.
[  OK  ] Created slice system-getty.slice.
[  OK  ] Listening on Syslog Socket.
         Mounting POSIX Message Queue File System...
         Mounting Huge Pages File System...
         Mounting FUSE Control File System...
         Starting Copy rules generated while the root was ro...
         Starting Journal Service...
[  OK  ] Started Journal Service.
[  OK  ] Reached target Slices.
         Starting Remount Root and Kernel File Systems...
[  OK  ] Mounted Huge Pages File System.
[  OK  ] Mounted POSIX Message Queue File System.
[  OK  ] Mounted FUSE Control File System.
[  OK  ] Started Copy rules generated while the root was ro.
[  OK  ] Started Remount Root and Kernel File Systems.
         Starting Load/Save Random Seed...
[  OK  ] Reached target Local File Systems (Pre).
[  OK  ] Reached target Local File Systems.
         Starting Create Volatile Files and Directories...
[  OK  ] Reached target Remote File Systems.
         Starting Trigger Flushing of Journal to Persistent Storage...
[  OK  ] Started Load/Save Random Seed.
         Starting LSB: Raise network interfaces....
[  OK  ] Started Create Volatile Files and Directories.
         Starting Update UTMP about System Boot/Shutdown...
[  OK  ] Started Trigger Flushing of Journal to Persistent Storage.
[  OK  ] Started Update UTMP about System Boot/Shutdown.
[  OK  ] Started LSB: Raise network interfaces..
[  OK  ] Reached target Network.
[  OK  ] Reached target Network is Online.
[  OK  ] Reached target System Initialization.
[  OK  ] Listening on D-Bus System Message Bus Socket.
[  OK  ] Reached target Sockets.
[  OK  ] Reached target Timers.
[  OK  ] Reached target Basic System.
         Starting /etc/rc.local Compatibility...
         Starting Login Service...
         Starting LSB: Regular background program processing daemon...
         Starting D-Bus System Message Bus...
[  OK  ] Started D-Bus System Message Bus.
         Starting System Logging Service...
[  OK  ] Started System Logging Service.
         Starting Permit User Sessions...
[  OK  ] Started /etc/rc.local Compatibility.
[  OK  ] Started LSB: Regular background program processing daemon.
         Starting Cleanup of Temporary Directories...
[  OK  ] Started Permit User Sessions.
         Starting Console Getty...
[  OK  ] Started Console Getty.
[  OK  ] Reached target Login Prompts.
[  OK  ] Started Login Service.
[  OK  ] Reached target Multi-User System.
[  OK  ] Reached target Graphical Interface.
         Starting Update UTMP about System Runlevel Changes...
[  OK  ] Started Cleanup of Temporary Directories.
[  OK  ] Started Update UTMP about System Runlevel Changes.
Debian GNU/Linux jessie/sid orion console
orion login:
That's it! Just one command to start a shell in your chroot or boot the container, again zero configuration needed. Finally, systemd provides a command called machinectl that allows you to introspect and control your container:
$ sudo machinectl status jessie
jessie
           Since: Wed 2015-10-07 11:22:56 CEST; 55min ago
          Leader: 32468 (systemd)
         Service: nspawn; class container
            Root: /srv/chroots/jessie
         Address: fe80::8e70:5aff:fe81:2290
                  192.168.122.1
                  192.168.1.13
              OS: Debian GNU/Linux jessie/sid
            Unit: machine-jessie.scope
                   32468 /lib/systemd/systemd
                   system.slice
                     dbus.service
                       32534 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile -...
                     cron.service
                       32539 /usr/sbin/cron
                     systemd-journald.service
                       32487 /lib/systemd/systemd-journald
                     systemd-logind.service
                       32532 /lib/systemd/systemd-logind
                     console-getty.service
                       32544 /sbin/agetty --noclear --keep-baud console 115200 38400 9600 vt102
                     rsyslog.service
                       32540 /usr/sbin/rsyslogd -n
With machinectl you can also reboot, poweroff, terminate your containers and more. There are so many things to learn about systemd and containers! Here are some references. This stuff is pretty exciting. Now that all major distributions use systemd by default, we can expect to have access to tools like journalctl and systemd-nspawn everywhere!

30 September 2015

Chris Lamb: Free software activities in September 2015

Inspired by Raphaël Hertzog, here is a monthly update covering a large part of what I have been doing in the free software world:
Debian The Reproducible Builds project was also covered in depth on LWN as well as in Lunar's weekly reports (#18, #19, #20, #21, #22).
Uploads
  • redis A new upstream release, as well as overhauling the systemd configuration, maintaining feature parity with sysvinit and adding various security hardening features.
  • python-redis Attempting to get its Debian Continuous Integration tests to pass successfully.
  • libfiu Ensuring we do not FTBFS under exotic locales.
  • gunicorn Dropping a dependency on python-tox now that tests are disabled.



RC bugs


I also filed FTBFS bugs against actdiag, actdiag, bangarang, bmon, bppphyview, cervisia, choqok, cinnamon-control-center, clasp, composer, cpl-plugin-naco, dirspec, django-countries, dmapi, dolphin-plugins, dulwich, elki, eqonomize, eztrace, fontmatrix, freedink, galera-3, golang-git2go, golang-github-golang-leveldb, gopher, gst-plugins-bad0.10, jbofihe, k3b, kalgebra, kbibtex, kde-baseapps, kde-dev-utils, kdesdk-kioslaves, kdesvn, kdevelop-php-docs, kdewebdev, kftpgrabber, kile, kmess, kmix, kmldonkey, knights, konsole4, kpartsplugin, kplayer, kraft, krecipes, krusader, ktp-auth-handler, ktp-common-internals, ktp-text-ui, libdevice-cdio-perl, libdr-tarantool-perl, libevent-rpc-perl, libmime-util-java, libmoosex-app-cmd-perl, libmoosex-app-cmd-perl, librdkafka, libxml-easyobj-perl, maven-dependency-plugin, mmtk, murano-dashboard, node-expat, node-iconv, node-raw-body, node-srs, node-websocket, ocaml-estring, ocaml-estring, oce, odb, oslo-config, oslo.messaging, ovirt-guest-agent, packagesearch, php-svn, php5-midgard2, phpunit-story, pike8.0, plasma-widget-adjustableclock, plowshare4, procps, pygpgme, pylibmc, pyroma, python-admesh, python-bleach, python-dmidecode, python-libdiscid, python-mne, python-mne, python-nmap, python-nmap, python-oslo.middleware, python-riemann-client, python-traceback2, qdjango, qsapecng, ruby-em-synchrony, ruby-ffi-rzmq, ruby-nokogiri, ruby-opengraph-parser, ruby-thread-safe, shortuuid, skrooge, smb4k, snp-sites, soprano, stopmotion, subtitlecomposer, svgpart, thin-provisioning-tools, umbrello, validator.js, vdr-plugin-prefermenu, vdr-plugin-vnsiserver, vdr-plugin-weather, webkitkde, xbmc-pvr-addons, xfsdump & zanshin.

14 September 2015

Francois Marier: Setting up a network scanner using SANE

Sharing a scanner over the network using SANE is fairly straightforward. Here's how I shared a scanner on a server (running Debian jessie) with a client (running Ubuntu trusty).

Install SANE The packages you need are the same on both the client and the server. You should check whether or not your scanner is supported by the latest stable release or by the latest development version. In my case, I needed to get a Canon LiDE 220 working, so I had to grab the libsane 1.0.25+git20150528-1 package from Debian experimental.

Test the scanner locally Once you have SANE installed, you can test it out locally to confirm that it detects your scanner:
scanimage -L
This should give you output similar to this:
device `genesys:libusb:001:006' is a Canon LiDE 220 flatbed scanner
If that doesn't work, make sure that the scanner is actually detected by the USB stack:
$ lsusb | grep Canon
Bus 001 Device 006: ID 04a9:190f Canon, Inc.
and that its USB ID shows up in the SANE backend it needs:
$ grep 190f /etc/sane.d/genesys.conf 
usb 0x04a9 0x190f
To do a test scan, simply run:
scanimage > test.ppm
and then take a look at the (greyscale) image it produced (test.ppm).
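If you don't have a PPM-capable viewer handy, you can convert the result first; for example with ImageMagick (assuming it is installed):
# convert the raw PPM output to a PNG for easier viewing
convert test.ppm test.png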

Configure the server With the scanner working locally, it's time to expose it to network clients by adding the client IP addresses to /etc/sane.d/saned.conf:
## Access list
192.168.1.3
and then opening the appropriate port on your firewall (typically /etc/network/iptables in Debian):
-A INPUT -s 192.168.1.3 -p tcp --dport 6566 -j ACCEPT
Then you need to ensure that the SANE server is running by setting the following in /etc/default/saned:
RUN=yes
if you're using the sysv init system, or by running this command:
systemctl enable saned.socket
if using systemd. I actually had to reboot to make saned visible to systemd, so if you still run into these errors:
$ service saned start
Failed to start saned.service: Unit saned.service is masked.
you're probably just one reboot away from getting it to work.
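Once the unit is no longer masked, it is worth confirming that the socket is actually listening on saned's port (6566, the same port opened in the firewall above); a quick check could look like this:
# start the socket unit now and verify its state
systemctl start saned.socket
systemctl status saned.socket
# confirm something is listening on the saned port
ss -ltn | grep 6566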

Configure the client On the client, all you need to do is add the following to /etc/sane.d/net.conf:
connect_timeout = 60
myserver
where myserver is the hostname or IP address of the server running saned.

Test the scanner remotely With everything in place, you should be able to see the scanner from the client computer:
$ scanimage -L
device `net:myserver:genesys:libusb:001:006' is a Canon LiDE 220 flatbed scanner
and successfully perform a test scan using this command:
scanimage > test.ppm
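You can also target the remote device explicitly and pass scan options. The option names depend on the backend, so treat this as a sketch and check scanimage --help for what your scanner actually supports:
# colour scan at 300 dpi from the remote device listed above
scanimage -d 'net:myserver:genesys:libusb:001:006' --mode Color --resolution 300 > colour-scan.ppm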

27 August 2015

Ritesh Raj Sarraf: Laptop Mode Tools - 1.68

I am pleased to announce the release of Laptop Mode Tools, version 1.68. This release is mainly focused on integration with the newer init system, systemd. Without the help of the awesome Debian systemd maintainers, this would not have been possible. Thank you, folks. While the focus is now on systemd, LMT will still support the older SysV init. With this new release there are some new files: laptop-mode.service, laptop-mode.timer and lmt-poll.service. All the files should be documented well enough for users. lmt-poll.service is the equivalent of the battery-level-polling module, should you need it (a short usage sketch follows the git log below). Filtered git log:
1.68 - Thu Aug 27 22:36:43 IST 2015
    * Fix all instances for BATTERY_LEVEL_POLLING
    * Group kill the polling daemon so that its child process get the same signal
    * Release the descriptor explicitly
    * Add identifier about who's our parent
    * Narrow down our power_supply subsystem event check condition
    * Fine tune the .service file
    * On my ultrabook, AC as reported as ACAD
    * Enhance lmt-udev to better work with systemd
    * Add a timer based polling for LMT. It is the equivalent of battery-polling-daemon,
      using systemd
    * Disable battery level polling by default, because most systems will have systemd running
    * Add documentation reference in systemd files
The md5 checksum for the tarball is 15edf643990e08deaebebf66b128b270
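As a brief usage sketch (assuming the new unit files end up in the standard systemd locations when the package is installed), enabling the timer-based polling could look like this:
# enable laptop-mode itself plus the optional polling timer
systemctl enable laptop-mode.service
systemctl enable laptop-mode.timer
systemctl start laptop-mode.timer
# check when the timer will fire next
systemctl list-timers laptop-mode.timer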
